multi_coll_drop.js on MCI_enterprise-rhel-71-ppc64le_job0 (2016-04-06 07:51:56 +0000)
[js_test:multi_coll_drop] 2016-04-06T02:51:56.418-0500 Starting JSTest jstests/sharding/multi_coll_drop.js... ./mongo --eval MongoRunner.dataDir = "/data/db/job0/mongorunner"; TestData = new Object(); TestData.wiredTigerEngineConfigString = ""; TestData.maxPort = 20249; TestData.wiredTigerIndexConfigString = ""; TestData.noJournal = false; TestData.testName = "multi_coll_drop"; TestData.storageEngine = "wiredTiger"; TestData.minPort = 20010; TestData.noJournalPrealloc = true; TestData.wiredTigerCollectionConfigString = ""; MongoRunner.dataPath = "/data/db/job0/mongorunner/"; load('jstests/libs/override_methods/sharding_continuous_config_stepdown.js'); --readMode commands --nodb jstests/sharding/multi_coll_drop.js
[js_test:multi_coll_drop] 2016-04-06T02:51:56.469-0500 MongoDB shell version: 3.3.4-37-g36f3ff8
[js_test:multi_coll_drop] 2016-04-06T02:51:56.501-0500 JSTest jstests/sharding/multi_coll_drop.js started with pid 63037.
[js_test:multi_coll_drop] 2016-04-06T02:51:56.501-0500 true
[js_test:multi_coll_drop] 2016-04-06T02:51:56.501-0500 Resetting db path '/data/db/job0/mongorunner/multidrop0'
[js_test:multi_coll_drop] 2016-04-06T02:51:56.506-0500 2016-04-06T02:51:56.501-0500 I - [thread1] shell: started program (sh63091): /data/mci/src/mongod --dbpath /data/db/job0/mongorunner/multidrop0 --port 20010 --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
[js_test:multi_coll_drop] 2016-04-06T02:51:56.512-0500 2016-04-06T02:51:56.511-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20010, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:51:56.528-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] MongoDB starting : pid=63091 port=20010 dbpath=/data/db/job0/mongorunner/multidrop0 64-bit host=mongovm16
[js_test:multi_coll_drop] 2016-04-06T02:51:56.528-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] db version v3.3.4-37-g36f3ff8
[js_test:multi_coll_drop] 2016-04-06T02:51:56.529-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] git version: 36f3ff8da1f7ae3710ceacc4e13adfd4abdb99da
[js_test:multi_coll_drop] 2016-04-06T02:51:56.531-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
[js_test:multi_coll_drop] 2016-04-06T02:51:56.533-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] allocator: tcmalloc
[js_test:multi_coll_drop] 2016-04-06T02:51:56.534-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] modules: enterprise
[js_test:multi_coll_drop] 2016-04-06T02:51:56.535-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] build environment:
[js_test:multi_coll_drop] 2016-04-06T02:51:56.538-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] distmod: rhel71
[js_test:multi_coll_drop] 2016-04-06T02:51:56.539-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] distarch: ppc64le
[js_test:multi_coll_drop] 2016-04-06T02:51:56.540-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] target_arch: ppc64le
[js_test:multi_coll_drop] 2016-04-06T02:51:56.549-0500 d20010| 2016-04-06T02:51:56.525-0500 I CONTROL [initandlisten] options: { net: { port: 20010 }, nopreallocj: true, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job0/mongorunner/multidrop0", engine: "wiredTiger" } }
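Annotation: the --eval preamble above seeds the TestData globals and loads the sharding_continuous_config_stepdown.js override before the test file runs. A minimal sketch of how a jstest starts a standalone node like d20010 using those globals (the exact option spellings accepted by MongoRunner are an assumption here):

    // Sketch: start a standalone mongod the way the harness does above.
    // TestData.* comes from the --eval preamble; MongoRunner turns
    // camelCase option names into the matching mongod flags (assumed).
    var conn = MongoRunner.runMongod({
        storageEngine: TestData.storageEngine,   // "wiredTiger" per the preamble
        setParameter: "enableTestCommands=1"
    });
    assert.neq(null, conn, "mongod failed to start");
    MongoRunner.stopMongod(conn);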
[js_test:multi_coll_drop] 2016-04-06T02:51:56.585-0500 d20010| 2016-04-06T02:51:56.584-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=73G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:multi_coll_drop] 2016-04-06T02:51:56.630-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:56.630-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten] ** NOTE: This is a development version (3.3.4-37-g36f3ff8) of MongoDB.
[js_test:multi_coll_drop] 2016-04-06T02:51:56.632-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:multi_coll_drop] 2016-04-06T02:51:56.632-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:56.637-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified.
[js_test:multi_coll_drop] 2016-04-06T02:51:56.638-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted,
[js_test:multi_coll_drop] 2016-04-06T02:51:56.639-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten] ** and the server listens on all available network interfaces.
[js_test:multi_coll_drop] 2016-04-06T02:51:56.639-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
[js_test:multi_coll_drop] 2016-04-06T02:51:56.640-0500 d20010| 2016-04-06T02:51:56.627-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:56.642-0500 d20010| 2016-04-06T02:51:56.628-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/mongorunner/multidrop0/diagnostic.data'
[js_test:multi_coll_drop] 2016-04-06T02:51:56.642-0500 d20010| 2016-04-06T02:51:56.628-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[js_test:multi_coll_drop] 2016-04-06T02:51:56.645-0500 d20010| 2016-04-06T02:51:56.638-0500 I NETWORK [initandlisten] waiting for connections on port 20010
[js_test:multi_coll_drop] 2016-04-06T02:51:56.712-0500 d20010| 2016-04-06T02:51:56.712-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:60569 #1 (1 connection now open)
[js_test:multi_coll_drop] 2016-04-06T02:51:56.719-0500 Starting new replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:51:56.720-0500 ReplSetTest starting set
[js_test:multi_coll_drop] 2016-04-06T02:51:56.720-0500 ReplSetTest n is : 0
[js_test:multi_coll_drop] 2016-04-06T02:51:56.721-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:51:56.721-0500 "useHostName" : true,
[js_test:multi_coll_drop] 2016-04-06T02:51:56.729-0500 "oplogSize" : 40,
[js_test:multi_coll_drop] 2016-04-06T02:51:56.742-0500 "keyFile" : undefined,
[js_test:multi_coll_drop] 2016-04-06T02:51:56.742-0500 "port" : 20011,
[js_test:multi_coll_drop] 2016-04-06T02:51:56.745-0500 "noprealloc" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:56.745-0500 "smallfiles" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:56.746-0500 "replSet" : "multidrop-configRS",
[js_test:multi_coll_drop] 2016-04-06T02:51:56.747-0500 "dbpath" : "$set-$node",
[js_test:multi_coll_drop] 2016-04-06T02:51:56.747-0500 "pathOpts" : {
[js_test:multi_coll_drop] 2016-04-06T02:51:56.748-0500 "testName" : "multidrop",
[js_test:multi_coll_drop] 2016-04-06T02:51:56.748-0500 "node" : 0,
[js_test:multi_coll_drop] 2016-04-06T02:51:56.748-0500 "set" : "multidrop-configRS"
[js_test:multi_coll_drop] 2016-04-06T02:51:56.754-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:51:56.754-0500 "journal" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:56.756-0500 "configsvr" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:56.756-0500 "noJournalPrealloc" : undefined,
[js_test:multi_coll_drop] 2016-04-06T02:51:56.756-0500 "storageEngine" : "wiredTiger",
[js_test:multi_coll_drop] 2016-04-06T02:51:56.756-0500 "verbose" : 2,
[js_test:multi_coll_drop] 2016-04-06T02:51:56.757-0500 "restart" : undefined
[js_test:multi_coll_drop] 2016-04-06T02:51:56.757-0500 }
[js_test:multi_coll_drop] 2016-04-06T02:51:56.758-0500 ReplSetTest Starting....
[js_test:multi_coll_drop] 2016-04-06T02:51:56.758-0500 Resetting db path '/data/db/job0/mongorunner/multidrop-configRS-0'
[js_test:multi_coll_drop] 2016-04-06T02:51:56.760-0500 2016-04-06T02:51:56.717-0500 I - [thread1] shell: started program (sh63808): /data/mci/src/mongod --oplogSize 40 --port 20011 --noprealloc --smallfiles --replSet multidrop-configRS --dbpath /data/db/job0/mongorunner/multidrop-configRS-0 --journal --configsvr --storageEngine wiredTiger -vv --nopreallocj --setParameter enableTestCommands=1
[js_test:multi_coll_drop] 2016-04-06T02:51:56.763-0500 2016-04-06T02:51:56.717-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20011, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:51:56.764-0500 c20011| note: noprealloc may hurt performance in many applications
[js_test:multi_coll_drop] 2016-04-06T02:51:56.765-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] MongoDB starting : pid=63808 port=20011 dbpath=/data/db/job0/mongorunner/multidrop-configRS-0 64-bit host=mongovm16
[js_test:multi_coll_drop] 2016-04-06T02:51:56.766-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] db version v3.3.4-37-g36f3ff8
[js_test:multi_coll_drop] 2016-04-06T02:51:56.767-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] git version: 36f3ff8da1f7ae3710ceacc4e13adfd4abdb99da
[js_test:multi_coll_drop] 2016-04-06T02:51:56.767-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
[js_test:multi_coll_drop] 2016-04-06T02:51:56.768-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] allocator: tcmalloc
[js_test:multi_coll_drop] 2016-04-06T02:51:56.768-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] modules: enterprise
[js_test:multi_coll_drop] 2016-04-06T02:51:56.768-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] build environment:
[js_test:multi_coll_drop] 2016-04-06T02:51:56.770-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] distmod: rhel71
[js_test:multi_coll_drop] 2016-04-06T02:51:56.773-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] distarch: ppc64le
[js_test:multi_coll_drop] 2016-04-06T02:51:56.773-0500 c20011| 2016-04-06T02:51:56.747-0500 I CONTROL [initandlisten] target_arch: ppc64le
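Annotation: the per-node options block printed at "ReplSetTest n is : 0" above is what ReplSetTest expands into the mongod argv seen at "shell: started program (sh63808)". A minimal sketch of driving the same three-node config server set from a jstest, with constructor options inferred from the values in this log:

    // Sketch: a 3-node config-server replica set like multidrop-configRS.
    var rst = new ReplSetTest({
        name: "multidrop-configRS",
        nodes: 3,
        nodeOptions: {configsvr: "", storageEngine: "wiredTiger", verbose: 2}
    });
    rst.startSet();   // launches the three mongod processes seen below
    rst.initiate();   // sends replSetInitiate to node 0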
enableTestCommands: "1" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job0/mongorunner/multidrop-configRS-0", engine: "wiredTiger", journal: { enabled: true }, mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:51:56.797-0500 c20011| 2016-04-06T02:51:56.747-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200 [js_test:multi_coll_drop] 2016-04-06T02:51:56.818-0500 c20011| 2016-04-06T02:51:56.811-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=73G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), [js_test:multi_coll_drop] 2016-04-06T02:51:56.875-0500 c20011| 2016-04-06T02:51:56.875-0500 D COMMAND [WTJournalFlusher] BackgroundJob starting: WTJournalFlusher [js_test:multi_coll_drop] 2016-04-06T02:51:56.876-0500 c20011| 2016-04-06T02:51:56.875-0500 D STORAGE [WTJournalFlusher] starting WTJournalFlusher thread [js_test:multi_coll_drop] 2016-04-06T02:51:56.892-0500 c20011| 2016-04-06T02:51:56.889-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:51:56.900-0500 c20011| 2016-04-06T02:51:56.896-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:_mdb_catalog ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:51:56.906-0500 c20011| 2016-04-06T02:51:56.901-0500 D STORAGE [initandlisten] flushing directory /data/db/job0/mongorunner/multidrop-configRS-0 [js_test:multi_coll_drop] 2016-04-06T02:51:56.914-0500 c20011| 2016-04-06T02:51:56.901-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger [js_test:multi_coll_drop] 2016-04-06T02:51:56.914-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten] [js_test:multi_coll_drop] 2016-04-06T02:51:56.914-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten] ** NOTE: This is a development version (3.3.4-37-g36f3ff8) of MongoDB. [js_test:multi_coll_drop] 2016-04-06T02:51:56.915-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten] ** Not recommended for production. [js_test:multi_coll_drop] 2016-04-06T02:51:56.915-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten] [js_test:multi_coll_drop] 2016-04-06T02:51:56.922-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified. [js_test:multi_coll_drop] 2016-04-06T02:51:56.924-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted, [js_test:multi_coll_drop] 2016-04-06T02:51:56.933-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten] ** and the server listens on all available network interfaces. [js_test:multi_coll_drop] 2016-04-06T02:51:56.934-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended. 
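Annotation: cache_size=73G in the wiredtiger_open string is the default cache sizing derived from the host's RAM; TestData.wiredTigerEngineConfigString was left empty, so nothing overrides it. A hedged sketch of pinning the cache down for a test (--wiredTigerCacheSizeGB is the stock mongod flag; the MongoRunner spelling of the option is an assumption):

    // Sketch: cap the WiredTiger cache so one test host can run many nodes.
    var conn = MongoRunner.runMongod({
        storageEngine: "wiredTiger",
        wiredTigerCacheSizeGB: 1   // assumed to map to --wiredTigerCacheSizeGB 1
    });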
[js_test:multi_coll_drop] 2016-04-06T02:51:56.940-0500 c20011| 2016-04-06T02:51:56.901-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:56.941-0500 c20011| 2016-04-06T02:51:56.901-0500 D COMMAND [SNMPAgent] BackgroundJob starting: SNMPAgent
[js_test:multi_coll_drop] 2016-04-06T02:51:56.944-0500 c20011| 2016-04-06T02:51:56.901-0500 D NETWORK [SNMPAgent] SNMPAgent not enabled
[js_test:multi_coll_drop] 2016-04-06T02:51:56.951-0500 c20011| 2016-04-06T02:51:56.901-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
[js_test:multi_coll_drop] 2016-04-06T02:51:56.952-0500 c20011| 2016-04-06T02:51:56.901-0500 D STORAGE [initandlisten] Checking node for SERVER-23299 eligibility
[js_test:multi_coll_drop] 2016-04-06T02:51:56.952-0500 c20011| 2016-04-06T02:51:56.901-0500 D STORAGE [initandlisten] Didn't find local.startup_log
[js_test:multi_coll_drop] 2016-04-06T02:51:56.952-0500 c20011| 2016-04-06T02:51:56.901-0500 D STORAGE [initandlisten] done repairDatabases
[js_test:multi_coll_drop] 2016-04-06T02:51:56.953-0500 c20011| 2016-04-06T02:51:56.901-0500 D QUERY [initandlisten] Running query: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:56.959-0500 c20011| 2016-04-06T02:51:56.901-0500 D QUERY [initandlisten] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:56.964-0500 c20011| 2016-04-06T02:51:56.901-0500 I COMMAND [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 6, W: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:51:56.970-0500 c20011| 2016-04-06T02:51:56.901-0500 D INDEX [initandlisten] checking complete
[js_test:multi_coll_drop] 2016-04-06T02:51:56.972-0500 c20011| 2016-04-06T02:51:56.901-0500 D QUERY [initandlisten] Collection local.me does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:56.973-0500 c20011| 2016-04-06T02:51:56.901-0500 D STORAGE [initandlisten] stored meta data for local.me @ RecordId(1)
[js_test:multi_coll_drop] 2016-04-06T02:51:56.974-0500 c20011| 2016-04-06T02:51:56.901-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:51:56.992-0500 c20011| 2016-04-06T02:51:56.912-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-0--6404702321693896372 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:51:56.994-0500 c20011| 2016-04-06T02:51:56.912-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.014-0500 c20011| 2016-04-06T02:51:56.912-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createSortedDataInterface ident: index-1--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.017-0500 c20011| 2016-04-06T02:51:56.912-0500 D STORAGE [initandlisten] create uri: table:index-1--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.022-0500 c20011| 2016-04-06T02:51:56.917-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-1--6404702321693896372 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:51:57.025-0500 c20011| 2016-04-06T02:51:56.917-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.026-0500 c20011| 2016-04-06T02:51:56.918-0500 D QUERY [initandlisten] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:51:57.031-0500 c20011| 2016-04-06T02:51:56.918-0500 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
[js_test:multi_coll_drop] 2016-04-06T02:51:57.032-0500 c20011| 2016-04-06T02:51:56.918-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.035-0500 c20011| 2016-04-06T02:51:56.918-0500 D ASIO [NetworkInterfaceASIO-Replication-0] The NetworkInterfaceASIO worker thread is spinning up
[js_test:multi_coll_drop] 2016-04-06T02:51:57.036-0500 2016-04-06T02:51:56.919-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20011, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:51:57.040-0500 c20011| 2016-04-06T02:51:56.919-0500 D EXECUTOR [replExecDBWorker-0] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.041-0500 c20011| 2016-04-06T02:51:56.919-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
[js_test:multi_coll_drop] 2016-04-06T02:51:57.043-0500 c20011| 2016-04-06T02:51:56.919-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
[js_test:multi_coll_drop] 2016-04-06T02:51:57.045-0500 c20011| 2016-04-06T02:51:56.919-0500 D EXECUTOR [replExecDBWorker-1] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.045-0500 c20011| 2016-04-06T02:51:56.919-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
[js_test:multi_coll_drop] 2016-04-06T02:51:57.046-0500 c20011| 2016-04-06T02:51:56.919-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/mongorunner/multidrop-configRS-0/diagnostic.data'
[js_test:multi_coll_drop] 2016-04-06T02:51:57.047-0500 c20011| 2016-04-06T02:51:56.919-0500 D EXECUTOR [replExecDBWorker-2] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.047-0500 c20011| 2016-04-06T02:51:56.919-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[js_test:multi_coll_drop] 2016-04-06T02:51:57.049-0500 c20011| 2016-04-06T02:51:56.919-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.051-0500 c20011| 2016-04-06T02:51:56.919-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ RecordId(2)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.054-0500 c20011| 2016-04-06T02:51:56.919-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.056-0500 c20011| 2016-04-06T02:51:56.927-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-2--6404702321693896372 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.059-0500 c20011| 2016-04-06T02:51:56.927-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.066-0500 c20011| 2016-04-06T02:51:56.927-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createSortedDataInterface ident: index-3--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.068-0500 c20011| 2016-04-06T02:51:56.927-0500 D STORAGE [initandlisten] create uri: table:index-3--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.071-0500 c20011| 2016-04-06T02:51:56.933-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-3--6404702321693896372 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:51:57.074-0500 c20011| 2016-04-06T02:51:56.933-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.075-0500 c20011| 2016-04-06T02:51:56.933-0500 I NETWORK [initandlisten] waiting for connections on port 20011
[js_test:multi_coll_drop] 2016-04-06T02:51:57.122-0500 c20011| 2016-04-06T02:51:57.120-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:33447 #1 (1 connection now open)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.123-0500 c20011| 2016-04-06T02:51:57.120-0500 D COMMAND [conn1] run command admin.$cmd { isMaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.126-0500 c20011| 2016-04-06T02:51:57.121-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:342 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:51:57.128-0500 [ connection to mongovm16:20011 ]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.129-0500 ReplSetTest n is : 1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.130-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.132-0500 "useHostName" : true,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.133-0500 "oplogSize" : 40,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.133-0500 "keyFile" : undefined,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.133-0500 "port" : 20012,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.135-0500 "noprealloc" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.135-0500 "smallfiles" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.137-0500 "replSet" : "multidrop-configRS",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.138-0500 "dbpath" : "$set-$node",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.139-0500 "pathOpts" : {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.140-0500 "testName" : "multidrop",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.140-0500 "node" : 1,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.141-0500 "set" : "multidrop-configRS"
[js_test:multi_coll_drop] 2016-04-06T02:51:57.141-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:51:57.142-0500 "journal" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.142-0500 "configsvr" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.143-0500 "noJournalPrealloc" : undefined,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.144-0500 "storageEngine" : "wiredTiger",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.144-0500 "verbose" : 2,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.145-0500 "restart" : undefined
[js_test:multi_coll_drop] 2016-04-06T02:51:57.145-0500 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.146-0500 ReplSetTest Starting....
[js_test:multi_coll_drop] 2016-04-06T02:51:57.146-0500 Resetting db path '/data/db/job0/mongorunner/multidrop-configRS-1'
[js_test:multi_coll_drop] 2016-04-06T02:51:57.148-0500 2016-04-06T02:51:57.123-0500 I - [thread1] shell: started program (sh65723): /data/mci/src/mongod --oplogSize 40 --port 20012 --noprealloc --smallfiles --replSet multidrop-configRS --dbpath /data/db/job0/mongorunner/multidrop-configRS-1 --journal --configsvr --storageEngine wiredTiger -vv --nopreallocj --setParameter enableTestCommands=1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.149-0500 2016-04-06T02:51:57.124-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20012, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:51:57.152-0500 c20012| note: noprealloc may hurt performance in many applications
[js_test:multi_coll_drop] 2016-04-06T02:51:57.160-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] MongoDB starting : pid=65723 port=20012 dbpath=/data/db/job0/mongorunner/multidrop-configRS-1 64-bit host=mongovm16
[js_test:multi_coll_drop] 2016-04-06T02:51:57.161-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] db version v3.3.4-37-g36f3ff8
[js_test:multi_coll_drop] 2016-04-06T02:51:57.163-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] git version: 36f3ff8da1f7ae3710ceacc4e13adfd4abdb99da
[js_test:multi_coll_drop] 2016-04-06T02:51:57.164-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
[js_test:multi_coll_drop] 2016-04-06T02:51:57.166-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] allocator: tcmalloc
[js_test:multi_coll_drop] 2016-04-06T02:51:57.167-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] modules: enterprise
[js_test:multi_coll_drop] 2016-04-06T02:51:57.169-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] build environment:
[js_test:multi_coll_drop] 2016-04-06T02:51:57.169-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] distmod: rhel71
[js_test:multi_coll_drop] 2016-04-06T02:51:57.170-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] distarch: ppc64le
[js_test:multi_coll_drop] 2016-04-06T02:51:57.171-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] target_arch: ppc64le
[js_test:multi_coll_drop] 2016-04-06T02:51:57.176-0500 c20012| 2016-04-06T02:51:57.157-0500 I CONTROL [initandlisten] options: { net: { port: 20012 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "multidrop-configRS" }, setParameter: { enableTestCommands: "1" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job0/mongorunner/multidrop-configRS-1", engine: "wiredTiger", journal: { enabled: true }, mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.178-0500 c20012| 2016-04-06T02:51:57.158-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
[js_test:multi_coll_drop] 2016-04-06T02:51:57.223-0500 c20012| 2016-04-06T02:51:57.214-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=73G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.262-0500 c20012| 2016-04-06T02:51:57.261-0500 D COMMAND [WTJournalFlusher] BackgroundJob starting: WTJournalFlusher
[js_test:multi_coll_drop] 2016-04-06T02:51:57.263-0500 c20012| 2016-04-06T02:51:57.261-0500 D STORAGE [WTJournalFlusher] starting WTJournalFlusher thread
[js_test:multi_coll_drop] 2016-04-06T02:51:57.273-0500 c20012| 2016-04-06T02:51:57.267-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.279-0500 c20012| 2016-04-06T02:51:57.270-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:_mdb_catalog ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.286-0500 c20012| 2016-04-06T02:51:57.274-0500 D STORAGE [initandlisten] flushing directory /data/db/job0/mongorunner/multidrop-configRS-1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.290-0500 c20012| 2016-04-06T02:51:57.274-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[js_test:multi_coll_drop] 2016-04-06T02:51:57.291-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.292-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten] ** NOTE: This is a development version (3.3.4-37-g36f3ff8) of MongoDB.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.293-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.294-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.294-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.296-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.297-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten] ** and the server listens on all available network interfaces.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.298-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
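Annotation: the same pair of warnings repeats on every node because these fixtures run without access control, without --bind_ip, and as root, which is fine for a throwaway CI host only. For comparison, a hedged sketch of the locked-down counterpart (flag names are stock mongod options; the MongoRunner key spellings are assumptions):

    // Sketch: the startup the warnings above are asking for.
    var conn = MongoRunner.runMongod({
        auth: "",                 // assumed to expand to --auth
        bind_ip: "127.0.0.1"      // assumed to expand to --bind_ip 127.0.0.1
    });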
[js_test:multi_coll_drop] 2016-04-06T02:51:57.298-0500 c20012| 2016-04-06T02:51:57.274-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.300-0500 c20012| 2016-04-06T02:51:57.275-0500 D COMMAND [SNMPAgent] BackgroundJob starting: SNMPAgent
[js_test:multi_coll_drop] 2016-04-06T02:51:57.302-0500 c20012| 2016-04-06T02:51:57.275-0500 D NETWORK [SNMPAgent] SNMPAgent not enabled
[js_test:multi_coll_drop] 2016-04-06T02:51:57.303-0500 c20012| 2016-04-06T02:51:57.275-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.303-0500 c20012| 2016-04-06T02:51:57.275-0500 D STORAGE [initandlisten] Checking node for SERVER-23299 eligibility
[js_test:multi_coll_drop] 2016-04-06T02:51:57.304-0500 c20012| 2016-04-06T02:51:57.275-0500 D STORAGE [initandlisten] Didn't find local.startup_log
[js_test:multi_coll_drop] 2016-04-06T02:51:57.304-0500 c20012| 2016-04-06T02:51:57.275-0500 D STORAGE [initandlisten] done repairDatabases
[js_test:multi_coll_drop] 2016-04-06T02:51:57.305-0500 c20012| 2016-04-06T02:51:57.275-0500 D QUERY [initandlisten] Running query: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:57.308-0500 c20012| 2016-04-06T02:51:57.275-0500 D QUERY [initandlisten] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:57.312-0500 c20012| 2016-04-06T02:51:57.275-0500 I COMMAND [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 6, W: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:51:57.313-0500 c20012| 2016-04-06T02:51:57.275-0500 D INDEX [initandlisten] checking complete
[js_test:multi_coll_drop] 2016-04-06T02:51:57.316-0500 c20012| 2016-04-06T02:51:57.275-0500 D QUERY [initandlisten] Collection local.me does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:57.317-0500 c20012| 2016-04-06T02:51:57.275-0500 D STORAGE [initandlisten] stored meta data for local.me @ RecordId(1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.319-0500 c20012| 2016-04-06T02:51:57.275-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.320-0500 c20012| 2016-04-06T02:51:57.280-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-0-6577373056560964212 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.321-0500 c20012| 2016-04-06T02:51:57.280-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.323-0500 c20012| 2016-04-06T02:51:57.280-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createSortedDataInterface ident: index-1-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.326-0500 c20012| 2016-04-06T02:51:57.280-0500 D STORAGE [initandlisten] create uri: table:index-1-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.329-0500 c20012| 2016-04-06T02:51:57.283-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-1-6577373056560964212 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:51:57.330-0500 c20012| 2016-04-06T02:51:57.283-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.330-0500 c20012| 2016-04-06T02:51:57.283-0500 D QUERY [initandlisten] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:51:57.333-0500 c20012| 2016-04-06T02:51:57.283-0500 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
[js_test:multi_coll_drop] 2016-04-06T02:51:57.334-0500 c20012| 2016-04-06T02:51:57.283-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.335-0500 c20012| 2016-04-06T02:51:57.283-0500 D ASIO [NetworkInterfaceASIO-Replication-0] The NetworkInterfaceASIO worker thread is spinning up
[js_test:multi_coll_drop] 2016-04-06T02:51:57.336-0500 c20012| 2016-04-06T02:51:57.283-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
[js_test:multi_coll_drop] 2016-04-06T02:51:57.336-0500 c20012| 2016-04-06T02:51:57.284-0500 D EXECUTOR [replExecDBWorker-0] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.336-0500 c20012| 2016-04-06T02:51:57.284-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
[js_test:multi_coll_drop] 2016-04-06T02:51:57.337-0500 c20012| 2016-04-06T02:51:57.284-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
[js_test:multi_coll_drop] 2016-04-06T02:51:57.338-0500 c20012| 2016-04-06T02:51:57.284-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/mongorunner/multidrop-configRS-1/diagnostic.data'
[js_test:multi_coll_drop] 2016-04-06T02:51:57.341-0500 c20012| 2016-04-06T02:51:57.284-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[js_test:multi_coll_drop] 2016-04-06T02:51:57.342-0500 c20012| 2016-04-06T02:51:57.284-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.343-0500 c20012| 2016-04-06T02:51:57.284-0500 D EXECUTOR [replExecDBWorker-2] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.344-0500 c20012| 2016-04-06T02:51:57.284-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ RecordId(2)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.345-0500 c20012| 2016-04-06T02:51:57.284-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.346-0500 c20012| 2016-04-06T02:51:57.287-0500 D EXECUTOR [replExecDBWorker-1] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.347-0500 c20012| 2016-04-06T02:51:57.287-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-2-6577373056560964212 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.348-0500 c20012| 2016-04-06T02:51:57.287-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.355-0500 c20012| 2016-04-06T02:51:57.288-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createSortedDataInterface ident: index-3-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.358-0500 c20012| 2016-04-06T02:51:57.288-0500 D STORAGE [initandlisten] create uri: table:index-3-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.360-0500 c20012| 2016-04-06T02:51:57.292-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-3-6577373056560964212 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:51:57.361-0500 c20012| 2016-04-06T02:51:57.292-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.363-0500 c20012| 2016-04-06T02:51:57.292-0500 I NETWORK [initandlisten] waiting for connections on port 20012
[js_test:multi_coll_drop] 2016-04-06T02:51:57.366-0500 c20012| 2016-04-06T02:51:57.324-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:54926 #1 (1 connection now open)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.367-0500 c20012| 2016-04-06T02:51:57.325-0500 D COMMAND [conn1] run command admin.$cmd { isMaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.369-0500 c20012| 2016-04-06T02:51:57.325-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:342 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:51:57.370-0500 [ connection to mongovm16:20011, connection to mongovm16:20012 ]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.371-0500 ReplSetTest n is : 2
[js_test:multi_coll_drop] 2016-04-06T02:51:57.372-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.372-0500 "useHostName" : true,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.372-0500 "oplogSize" : 40,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.373-0500 "keyFile" : undefined,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.375-0500 "port" : 20013,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.375-0500 "noprealloc" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.375-0500 "smallfiles" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.377-0500 "replSet" : "multidrop-configRS",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.378-0500 "dbpath" : "$set-$node",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.379-0500 "pathOpts" : {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.380-0500 "testName" : "multidrop",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.382-0500 "node" : 2,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.382-0500 "set" : "multidrop-configRS"
[js_test:multi_coll_drop] 2016-04-06T02:51:57.382-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:51:57.385-0500 "journal" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.385-0500 "configsvr" : "",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.386-0500 "noJournalPrealloc" : undefined,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.386-0500 "storageEngine" : "wiredTiger",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.386-0500 "verbose" : 2,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.387-0500 "restart" : undefined
[js_test:multi_coll_drop] 2016-04-06T02:51:57.388-0500 }
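Annotation: in each node-options block, "dbpath" : "$set-$node" plus "pathOpts" resolves to the concrete directories seen at the "Resetting db path" lines (multidrop-configRS-0/1/2). A hypothetical helper mirroring that substitution (expandPathOpts is not a real harness function, it just reproduces the observable behavior):

    // Hypothetical sketch of the "$set-$node" template expansion.
    function expandPathOpts(template, pathOpts) {
        return template.replace(/\$(\w+)/g, function(match, key) {
            return pathOpts[key];
        });
    }
    // Returns "multidrop-configRS-2", matching the dbpath reset below.
    expandPathOpts("$set-$node", {set: "multidrop-configRS", node: 2});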
[js_test:multi_coll_drop] 2016-04-06T02:51:57.388-0500 ReplSetTest Starting....
[js_test:multi_coll_drop] 2016-04-06T02:51:57.390-0500 Resetting db path '/data/db/job0/mongorunner/multidrop-configRS-2'
[js_test:multi_coll_drop] 2016-04-06T02:51:57.393-0500 2016-04-06T02:51:57.328-0500 I - [thread1] shell: started program (sh66033): /data/mci/src/mongod --oplogSize 40 --port 20013 --noprealloc --smallfiles --replSet multidrop-configRS --dbpath /data/db/job0/mongorunner/multidrop-configRS-2 --journal --configsvr --storageEngine wiredTiger -vv --nopreallocj --setParameter enableTestCommands=1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.395-0500 2016-04-06T02:51:57.329-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20013, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:51:57.396-0500 c20013| note: noprealloc may hurt performance in many applications
[js_test:multi_coll_drop] 2016-04-06T02:51:57.397-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] MongoDB starting : pid=66033 port=20013 dbpath=/data/db/job0/mongorunner/multidrop-configRS-2 64-bit host=mongovm16
[js_test:multi_coll_drop] 2016-04-06T02:51:57.398-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] db version v3.3.4-37-g36f3ff8
[js_test:multi_coll_drop] 2016-04-06T02:51:57.398-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] git version: 36f3ff8da1f7ae3710ceacc4e13adfd4abdb99da
[js_test:multi_coll_drop] 2016-04-06T02:51:57.399-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
[js_test:multi_coll_drop] 2016-04-06T02:51:57.402-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] allocator: tcmalloc
[js_test:multi_coll_drop] 2016-04-06T02:51:57.402-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] modules: enterprise
[js_test:multi_coll_drop] 2016-04-06T02:51:57.402-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] build environment:
[js_test:multi_coll_drop] 2016-04-06T02:51:57.404-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] distmod: rhel71
[js_test:multi_coll_drop] 2016-04-06T02:51:57.405-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] distarch: ppc64le
[js_test:multi_coll_drop] 2016-04-06T02:51:57.406-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] target_arch: ppc64le
[js_test:multi_coll_drop] 2016-04-06T02:51:57.409-0500 c20013| 2016-04-06T02:51:57.356-0500 I CONTROL [initandlisten] options: { net: { port: 20013 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "multidrop-configRS" }, setParameter: { enableTestCommands: "1" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job0/mongorunner/multidrop-configRS-2", engine: "wiredTiger", journal: { enabled: true }, mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.415-0500 c20013| 2016-04-06T02:51:57.356-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
[js_test:multi_coll_drop] 2016-04-06T02:51:57.418-0500 c20013| 2016-04-06T02:51:57.414-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=73G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.468-0500 c20013| 2016-04-06T02:51:57.466-0500 D COMMAND [WTJournalFlusher] BackgroundJob starting: WTJournalFlusher
[js_test:multi_coll_drop] 2016-04-06T02:51:57.468-0500 c20013| 2016-04-06T02:51:57.466-0500 D STORAGE [WTJournalFlusher] starting WTJournalFlusher thread
[js_test:multi_coll_drop] 2016-04-06T02:51:57.487-0500 c20013| 2016-04-06T02:51:57.486-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.489-0500 c20013| 2016-04-06T02:51:57.489-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:_mdb_catalog ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.493-0500 c20013| 2016-04-06T02:51:57.492-0500 D STORAGE [initandlisten] flushing directory /data/db/job0/mongorunner/multidrop-configRS-2
[js_test:multi_coll_drop] 2016-04-06T02:51:57.494-0500 c20013| 2016-04-06T02:51:57.493-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[js_test:multi_coll_drop] 2016-04-06T02:51:57.494-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.496-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten] ** NOTE: This is a development version (3.3.4-37-g36f3ff8) of MongoDB.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.497-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.497-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.499-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.500-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.503-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten] ** and the server listens on all available network interfaces.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.503-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
[js_test:multi_coll_drop] 2016-04-06T02:51:57.503-0500 c20013| 2016-04-06T02:51:57.493-0500 I CONTROL [initandlisten]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.505-0500 c20013| 2016-04-06T02:51:57.493-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.509-0500 c20013| 2016-04-06T02:51:57.493-0500 D STORAGE [initandlisten] Checking node for SERVER-23299 eligibility
[js_test:multi_coll_drop] 2016-04-06T02:51:57.511-0500 c20013| 2016-04-06T02:51:57.493-0500 D STORAGE [initandlisten] Didn't find local.startup_log
[js_test:multi_coll_drop] 2016-04-06T02:51:57.512-0500 c20013| 2016-04-06T02:51:57.493-0500 D STORAGE [initandlisten] done repairDatabases
[js_test:multi_coll_drop] 2016-04-06T02:51:57.515-0500 c20013| 2016-04-06T02:51:57.493-0500 D QUERY [initandlisten] Running query: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:57.516-0500 c20013| 2016-04-06T02:51:57.493-0500 D QUERY [initandlisten] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:57.520-0500 c20013| 2016-04-06T02:51:57.493-0500 I COMMAND [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 6, W: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:51:57.521-0500 c20013| 2016-04-06T02:51:57.493-0500 D INDEX [initandlisten] checking complete
[js_test:multi_coll_drop] 2016-04-06T02:51:57.524-0500 c20013| 2016-04-06T02:51:57.493-0500 D QUERY [initandlisten] Collection local.me does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:51:57.527-0500 c20013| 2016-04-06T02:51:57.493-0500 D STORAGE [initandlisten] stored meta data for local.me @ RecordId(1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.534-0500 c20013| 2016-04-06T02:51:57.493-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.534-0500 c20013| 2016-04-06T02:51:57.496-0500 D COMMAND [SNMPAgent] BackgroundJob starting: SNMPAgent
[js_test:multi_coll_drop] 2016-04-06T02:51:57.536-0500 c20013| 2016-04-06T02:51:57.496-0500 D NETWORK [SNMPAgent] SNMPAgent not enabled
[js_test:multi_coll_drop] 2016-04-06T02:51:57.538-0500 c20013| 2016-04-06T02:51:57.497-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-0-751336887848580549 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.543-0500 c20013| 2016-04-06T02:51:57.497-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.547-0500 c20013| 2016-04-06T02:51:57.497-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createSortedDataInterface ident: index-1-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.552-0500 c20013| 2016-04-06T02:51:57.497-0500 D STORAGE [initandlisten] create uri: table:index-1-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.556-0500 c20013| 2016-04-06T02:51:57.503-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-1-751336887848580549 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:51:57.558-0500 c20013| 2016-04-06T02:51:57.503-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.561-0500 c20013| 2016-04-06T02:51:57.503-0500 D QUERY [initandlisten] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:51:57.564-0500 c20013| 2016-04-06T02:51:57.503-0500 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
[js_test:multi_coll_drop] 2016-04-06T02:51:57.567-0500 c20013| 2016-04-06T02:51:57.503-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.567-0500 c20013| 2016-04-06T02:51:57.503-0500 D ASIO [NetworkInterfaceASIO-Replication-0] The NetworkInterfaceASIO worker thread is spinning up
[js_test:multi_coll_drop] 2016-04-06T02:51:57.571-0500 c20013| 2016-04-06T02:51:57.503-0500 D EXECUTOR [replExecDBWorker-0] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.577-0500 c20013| 2016-04-06T02:51:57.504-0500 D EXECUTOR [replExecDBWorker-1] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.579-0500 c20013| 2016-04-06T02:51:57.504-0500 D EXECUTOR [replExecDBWorker-2] starting thread in pool replExecDBWorker-Pool
[js_test:multi_coll_drop] 2016-04-06T02:51:57.581-0500 c20013| 2016-04-06T02:51:57.504-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
[js_test:multi_coll_drop] 2016-04-06T02:51:57.584-0500 c20013| 2016-04-06T02:51:57.504-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
[js_test:multi_coll_drop] 2016-04-06T02:51:57.586-0500 c20013| 2016-04-06T02:51:57.504-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/mongorunner/multidrop-configRS-2/diagnostic.data'
[js_test:multi_coll_drop] 2016-04-06T02:51:57.586-0500 c20013| 2016-04-06T02:51:57.504-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.587-0500 c20013| 2016-04-06T02:51:57.504-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ RecordId(2)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.599-0500 c20013| 2016-04-06T02:51:57.504-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.600-0500 c20013| 2016-04-06T02:51:57.504-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
[js_test:multi_coll_drop] 2016-04-06T02:51:57.602-0500 c20013| 2016-04-06T02:51:57.504-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
[js_test:multi_coll_drop] 2016-04-06T02:51:57.604-0500 c20013| 2016-04-06T02:51:57.510-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-2-751336887848580549 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:51:57.607-0500 c20013| 2016-04-06T02:51:57.510-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.612-0500 c20013| 2016-04-06T02:51:57.511-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createSortedDataInterface ident: index-3-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.615-0500 c20013| 2016-04-06T02:51:57.511-0500 D STORAGE [initandlisten] create uri: table:index-3-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }),
[js_test:multi_coll_drop] 2016-04-06T02:51:57.616-0500 c20013| 2016-04-06T02:51:57.522-0500 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-3-751336887848580549 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:51:57.617-0500 c20013| 2016-04-06T02:51:57.522-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:51:57.617-0500 c20013| 2016-04-06T02:51:57.522-0500 I NETWORK [initandlisten] waiting for connections on port 20013
[js_test:multi_coll_drop] 2016-04-06T02:51:57.619-0500 c20013| 2016-04-06T02:51:57.529-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:40479 #1 (1 connection now open)
[js_test:multi_coll_drop] 2016-04-06T02:51:57.619-0500 c20013| 2016-04-06T02:51:57.530-0500 D COMMAND [conn1] run command admin.$cmd { isMaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.620-0500 c20013| 2016-04-06T02:51:57.530-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:342 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:51:57.621-0500 [
[js_test:multi_coll_drop] 2016-04-06T02:51:57.621-0500 connection to mongovm16:20011,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.623-0500 connection to mongovm16:20012,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.623-0500 connection to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:51:57.624-0500 ]
[js_test:multi_coll_drop] 2016-04-06T02:51:57.626-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.627-0500 "replSetInitiate" : {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.628-0500 "_id" : "multidrop-configRS",
[js_test:multi_coll_drop] 2016-04-06T02:51:57.629-0500 "members" : [
[js_test:multi_coll_drop] 2016-04-06T02:51:57.629-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.629-0500 "_id" : 0,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.630-0500 "host" : "mongovm16:20011"
[js_test:multi_coll_drop] 2016-04-06T02:51:57.631-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:51:57.631-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.631-0500 "_id" : 1,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.631-0500 "host" : "mongovm16:20012"
[js_test:multi_coll_drop] 2016-04-06T02:51:57.634-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:51:57.635-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.636-0500 "_id" : 2,
[js_test:multi_coll_drop] 2016-04-06T02:51:57.636-0500 "host" : "mongovm16:20013"
[js_test:multi_coll_drop] 2016-04-06T02:51:57.636-0500 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.640-0500 ],
[js_test:multi_coll_drop] 2016-04-06T02:51:57.640-0500 "settings" : {
[js_test:multi_coll_drop] 2016-04-06T02:51:57.644-0500 "electionTimeoutMillis" : 5000
[js_test:multi_coll_drop] 2016-04-06T02:51:57.650-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:51:57.650-0500 "configsvr" : true
[js_test:multi_coll_drop] 2016-04-06T02:51:57.650-0500 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.652-0500 }
[js_test:multi_coll_drop] 2016-04-06T02:51:57.650-0500 "configsvr" : true [js_test:multi_coll_drop] 2016-04-06T02:51:57.650-0500 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.652-0500 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.656-0500 c20011| 2016-04-06T02:51:57.531-0500 D COMMAND [conn1] run command admin.$cmd { replSetInitiate: { _id: "multidrop-configRS", members: [ { _id: 0.0, host: "mongovm16:20011" }, { _id: 1.0, host: "mongovm16:20012" }, { _id: 2.0, host: "mongovm16:20013" } ], settings: { electionTimeoutMillis: 5000.0 }, configsvr: true } } [js_test:multi_coll_drop] 2016-04-06T02:51:57.656-0500 c20011| 2016-04-06T02:51:57.531-0500 D COMMAND [conn1] command: replSetInitiate [js_test:multi_coll_drop] 2016-04-06T02:51:57.656-0500 c20011| 2016-04-06T02:51:57.531-0500 I REPL [conn1] replSetInitiate admin command received from client [js_test:multi_coll_drop] 2016-04-06T02:51:57.659-0500 c20011| 2016-04-06T02:51:57.531-0500 D NETWORK [conn1] getBoundAddrs(): [ 127.0.0.1] [ 192.168.100.28] [ 192.168.2.13] [js_test:multi_coll_drop] 2016-04-06T02:51:57.662-0500 c20011| 2016-04-06T02:51:57.531-0500 D NETWORK [conn1] getAddrsForHost("mongovm16:20011"): [ 192.168.100.28] [js_test:multi_coll_drop] 2016-04-06T02:51:57.677-0500 c20011| 2016-04-06T02:51:57.532-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:51:57.678-0500 c20012| 2016-04-06T02:51:57.532-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36069 #2 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:57.679-0500 c20011| 2016-04-06T02:51:57.533-0500 D NETWORK [conn1] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:51:57.680-0500 c20012| 2016-04-06T02:51:57.537-0500 D COMMAND [conn2] run command admin.$cmd { _isSelf: 1 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.682-0500 c20012| 2016-04-06T02:51:57.537-0500 I COMMAND [conn2] command admin.$cmd command: _isSelf { _isSelf: 1 } numYields:0 reslen:113 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.693-0500 c20012| 2016-04-06T02:51:57.537-0500 D NETWORK [conn2] SocketException: remote: 192.168.100.28:36069 error: 9001 socket exception [CLOSED] server [192.168.100.28:36069] [js_test:multi_coll_drop] 2016-04-06T02:51:57.693-0500 c20012| 2016-04-06T02:51:57.537-0500 I NETWORK [conn2] end connection 192.168.100.28:36069 (1 connection now open) [js_test:multi_coll_drop] 2016-04-06T02:51:57.693-0500 c20011| 2016-04-06T02:51:57.538-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:51:57.694-0500 c20013| 2016-04-06T02:51:57.538-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49335 #2 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:57.695-0500 c20011| 2016-04-06T02:51:57.538-0500 D NETWORK [conn1] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:51:57.695-0500 c20013| 2016-04-06T02:51:57.538-0500 D COMMAND [conn2] run command admin.$cmd { _isSelf: 1 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.699-0500 c20013| 2016-04-06T02:51:57.538-0500 I COMMAND [conn2] command admin.$cmd command: _isSelf { _isSelf: 1 } numYields:0 reslen:113 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.700-0500 c20011| 2016-04-06T02:51:57.538-0500 I REPL [conn1] replSetInitiate config object with 3 members parses ok [js_test:multi_coll_drop] 2016-04-06T02:51:57.705-0500 c20011| 
2016-04-06T02:51:57.538-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:07.538-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", pv: 1, v: 1, from: "mongovm16:20011", fromId: 0, checkEmpty: true } [js_test:multi_coll_drop] 2016-04-06T02:51:57.715-0500 c20011| 2016-04-06T02:51:57.538-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:57.716-0500 c20013| 2016-04-06T02:51:57.538-0500 D NETWORK [conn2] SocketException: remote: 192.168.100.28:49335 error: 9001 socket exception [CLOSED] server [192.168.100.28:49335] [js_test:multi_coll_drop] 2016-04-06T02:51:57.717-0500 c20013| 2016-04-06T02:51:57.538-0500 I NETWORK [conn2] end connection 192.168.100.28:49335 (1 connection now open) [js_test:multi_coll_drop] 2016-04-06T02:51:57.731-0500 c20011| 2016-04-06T02:51:57.538-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 2 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:07.538-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", pv: 1, v: 1, from: "mongovm16:20011", fromId: 0, checkEmpty: true } [js_test:multi_coll_drop] 2016-04-06T02:51:57.734-0500 c20011| 2016-04-06T02:51:57.538-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:57.736-0500 c20011| 2016-04-06T02:51:57.538-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 3 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:57.737-0500 c20011| 2016-04-06T02:51:57.539-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 4 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:57.741-0500 c20013| 2016-04-06T02:51:57.539-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49337 #3 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:57.750-0500 c20012| 2016-04-06T02:51:57.539-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36071 #3 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:57.751-0500 c20012| 2016-04-06T02:51:57.539-0500 D COMMAND [conn3] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:51:57.753-0500 c20013| 2016-04-06T02:51:57.539-0500 D COMMAND [conn3] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:51:57.754-0500 c20012| 2016-04-06T02:51:57.539-0500 I COMMAND [conn3] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:342 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.754-0500 c20011| 2016-04-06T02:51:57.539-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:57.754-0500 c20011| 2016-04-06T02:51:57.539-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 3 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:51:57.755-0500 c20011| 2016-04-06T02:51:57.539-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:57.756-0500 c20012| 2016-04-06T02:51:57.539-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", pv: 1, v: 1, from: "mongovm16:20011", fromId: 0, checkEmpty: true } [js_test:multi_coll_drop] 2016-04-06T02:51:57.757-0500 
c20012| 2016-04-06T02:51:57.539-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:57.758-0500 c20013| 2016-04-06T02:51:57.540-0500 I COMMAND [conn3] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:342 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.758-0500 c20012| 2016-04-06T02:51:57.540-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:51:57.540Z [js_test:multi_coll_drop] 2016-04-06T02:51:57.761-0500 c20012| 2016-04-06T02:51:57.540-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:07.540-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:57.761-0500 c20011| 2016-04-06T02:51:57.540-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:57.762-0500 c20011| 2016-04-06T02:51:57.540-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 4 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:51:57.762-0500 c20011| 2016-04-06T02:51:57.540-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 2 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:57.772-0500 c20012| 2016-04-06T02:51:57.540-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:57.774-0500 c20012| 2016-04-06T02:51:57.540-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 2 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:57.776-0500 c20011| 2016-04-06T02:51:57.540-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58404 #2 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:57.777-0500 c20013| 2016-04-06T02:51:57.540-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", pv: 1, v: 1, from: "mongovm16:20011", fromId: 0, checkEmpty: true } [js_test:multi_coll_drop] 2016-04-06T02:51:57.778-0500 c20013| 2016-04-06T02:51:57.540-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:57.780-0500 c20012| 2016-04-06T02:51:57.540-0500 I COMMAND [conn3] command local.oplog.rs command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", pv: 1, v: 1, from: "mongovm16:20011", fromId: 0, checkEmpty: true } numYields:0 reslen:438 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.781-0500 c20011| 2016-04-06T02:51:57.540-0500 D COMMAND [conn2] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:51:57.783-0500 c20011| 2016-04-06T02:51:57.540-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1 finished with response: { hasData: false, ok: 1.0, time: 1459929117, e: false, rs: true, state: 0, v: -2, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: new Date(0) } [js_test:multi_coll_drop] 2016-04-06T02:51:57.785-0500 c20013| 2016-04-06T02:51:57.540-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:51:57.540Z [js_test:multi_coll_drop] 2016-04-06T02:51:57.788-0500 c20013| 2016-04-06T02:51:57.540-0500 D 
ASIO [ReplicationExecutor] startCommand: RemoteCommand 1 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:07.540-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:57.789-0500 c20011| 2016-04-06T02:51:57.540-0500 I COMMAND [conn2] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:342 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.790-0500 c20012| 2016-04-06T02:51:57.540-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:57.790-0500 c20012| 2016-04-06T02:51:57.540-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 2 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:51:57.797-0500 c20012| 2016-04-06T02:51:57.540-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:57.798-0500 c20013| 2016-04-06T02:51:57.540-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:57.799-0500 c20011| 2016-04-06T02:51:57.540-0500 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:57.800-0500 c20011| 2016-04-06T02:51:57.540-0500 D COMMAND [conn2] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:57.802-0500 c20011| 2016-04-06T02:51:57.541-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } numYields:0 reslen:428 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.804-0500 c20012| 2016-04-06T02:51:57.541-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1 finished with response: { ok: 1.0, time: 1459929117, e: false, rs: true, state: 0, v: -2, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: new Date(0) } [js_test:multi_coll_drop] 2016-04-06T02:51:57.805-0500 c20012| 2016-04-06T02:51:57.541-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:51:59.541Z [js_test:multi_coll_drop] 2016-04-06T02:51:57.813-0500 c20011| 2016-04-06T02:51:57.541-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 2 finished with response: { hasData: false, ok: 1.0, time: 1459929117, e: false, rs: true, state: 0, v: -2, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: new Date(0) } [js_test:multi_coll_drop] 2016-04-06T02:51:57.813-0500 c20011| 2016-04-06T02:51:57.541-0500 I REPL [conn1] ****** [js_test:multi_coll_drop] 2016-04-06T02:51:57.820-0500 c20011| 2016-04-06T02:51:57.541-0500 I REPL [conn1] creating replication oplog of size: 40MB... 
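
The "creating replication oplog of size: 40MB" entry just above reflects the oplog size the test harness requested when it started these config-server nodes. A minimal mongo-shell sketch, assuming the standard ReplSetTest helper available to jstests (not part of this log), of how a test asks for that oplog size:

    // Hedged sketch: start a 3-node set whose members are created with
    // the 40MB oplog seen in the log entry above.
    var rst = new ReplSetTest({name: "multidrop-configRS", nodes: 3, oplogSize: 40});
    rst.startSet();   // launches the mongod processes
    rst.initiate();   // runs replSetInitiate against the first node
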
[js_test:multi_coll_drop] 2016-04-06T02:51:57.822-0500 c20011| 2016-04-06T02:51:57.541-0500 D STORAGE [conn1] stored meta data for local.oplog.rs @ RecordId(3) [js_test:multi_coll_drop] 2016-04-06T02:51:57.826-0500 c20011| 2016-04-06T02:51:57.541-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-4--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,type=file,memory_page_max=10m,key_format=q,value_format=u,app_metadata=(formatVersion=1,oplogKeyExtractionVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:51:57.829-0500 c20013| 2016-04-06T02:51:57.541-0500 I COMMAND [conn3] command local.oplog.rs command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", pv: 1, v: 1, from: "mongovm16:20011", fromId: 0, checkEmpty: true } numYields:0 reslen:438 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.830-0500 c20011| 2016-04-06T02:51:57.543-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58405 #3 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:57.832-0500 c20013| 2016-04-06T02:51:57.542-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 2 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:57.833-0500 c20011| 2016-04-06T02:51:57.543-0500 D COMMAND [conn3] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:51:57.834-0500 c20011| 2016-04-06T02:51:57.543-0500 I COMMAND [conn3] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:342 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.835-0500 c20013| 2016-04-06T02:51:57.543-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:57.836-0500 c20013| 2016-04-06T02:51:57.543-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 2 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:51:57.840-0500 c20013| 2016-04-06T02:51:57.543-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:57.840-0500 c20011| 2016-04-06T02:51:57.543-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:57.842-0500 c20011| 2016-04-06T02:51:57.543-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:57.850-0500 c20011| 2016-04-06T02:51:57.544-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } numYields:0 reslen:428 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.853-0500 c20013| 2016-04-06T02:51:57.544-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1 finished with response: { ok: 1.0, time: 1459929117, e: false, rs: true, state: 0, v: -2, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: new Date(0) } [js_test:multi_coll_drop] 2016-04-06T02:51:57.856-0500 c20013| 2016-04-06T02:51:57.544-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:51:59.544Z 
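
The replSetHeartbeat traffic above is driven by the replSetInitiate command whose config object was dumped earlier in this log. A minimal shell equivalent of that call, with the field values copied from the logged config:

    // Hedged sketch: the adminCommand form of the initiate handled above.
    db.adminCommand({
        replSetInitiate: {
            _id: "multidrop-configRS",
            configsvr: true,
            members: [
                {_id: 0, host: "mongovm16:20011"},
                {_id: 1, host: "mongovm16:20012"},
                {_id: 2, host: "mongovm16:20013"}
            ],
            settings: {electionTimeoutMillis: 5000}
        }
    });
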
[js_test:multi_coll_drop] 2016-04-06T02:51:57.861-0500 c20011| 2016-04-06T02:51:57.548-0500 D STORAGE [conn1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-4--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:51:57.865-0500 c20011| 2016-04-06T02:51:57.548-0500 I STORAGE [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs [js_test:multi_coll_drop] 2016-04-06T02:51:57.868-0500 c20011| 2016-04-06T02:51:57.549-0500 I STORAGE [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:51:57.869-0500 c20011| 2016-04-06T02:51:57.549-0500 I STORAGE [conn1] Scanning the oplog to determine where to place markers for truncation [js_test:multi_coll_drop] 2016-04-06T02:51:57.871-0500 c20011| 2016-04-06T02:51:57.549-0500 D COMMAND [WT RecordStoreThread: local.oplog.rs] BackgroundJob starting: WT RecordStoreThread: local.oplog.rs [js_test:multi_coll_drop] 2016-04-06T02:51:57.872-0500 c20011| 2016-04-06T02:51:57.549-0500 D STORAGE [conn1] local.oplog.rs: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:57.872-0500 c20011| 2016-04-06T02:51:57.549-0500 D STORAGE [conn1] WiredTigerKVEngine::flushAllFiles [js_test:multi_coll_drop] 2016-04-06T02:51:57.874-0500 c20011| 2016-04-06T02:51:57.549-0500 D STORAGE [conn1] WiredTigerSizeStorer::storeInto table:_mdb_catalog -> { numRecords: 3, dataSize: 782 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.875-0500 c20011| 2016-04-06T02:51:57.549-0500 D STORAGE [conn1] WiredTigerSizeStorer::storeInto table:collection-0--6404702321693896372 -> { numRecords: 1, dataSize: 42 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.876-0500 c20011| 2016-04-06T02:51:57.549-0500 D STORAGE [conn1] WiredTigerSizeStorer::storeInto table:collection-2--6404702321693896372 -> { numRecords: 1, dataSize: 1835 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.878-0500 c20011| 2016-04-06T02:51:57.549-0500 D STORAGE [conn1] WiredTigerSizeStorer::storeInto table:collection-4--6404702321693896372 -> { numRecords: 0, dataSize: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.878-0500 c20011| 2016-04-06T02:51:57.586-0500 I REPL [conn1] ****** [js_test:multi_coll_drop] 2016-04-06T02:51:57.879-0500 c20011| 2016-04-06T02:51:57.586-0500 D STORAGE [conn1] stored meta data for local.system.replset @ RecordId(4) [js_test:multi_coll_drop] 2016-04-06T02:51:57.888-0500 c20011| 2016-04-06T02:51:57.586-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-5--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:51:57.889-0500 c20011| 2016-04-06T02:51:57.590-0500 D STORAGE [conn1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-5--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:51:57.896-0500 c20011| 2016-04-06T02:51:57.591-0500 D STORAGE [conn1] local.system.replset: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:57.898-0500 c20011| 2016-04-06T02:51:57.591-0500 D STORAGE [conn1] WiredTigerKVEngine::createSortedDataInterface ident: index-6--6404702321693896372 config: 
type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }), [js_test:multi_coll_drop] 2016-04-06T02:51:57.901-0500 c20011| 2016-04-06T02:51:57.591-0500 D STORAGE [conn1] create uri: table:index-6--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }), [js_test:multi_coll_drop] 2016-04-06T02:51:57.905-0500 c20011| 2016-04-06T02:51:57.593-0500 D STORAGE [conn1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-6--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:51:57.907-0500 c20011| 2016-04-06T02:51:57.593-0500 D STORAGE [conn1] local.system.replset: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:57.908-0500 c20011| 2016-04-06T02:51:57.593-0500 D QUERY [conn1] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:51:57.910-0500 c20011| 2016-04-06T02:51:57.593-0500 D REPL [ReplicationExecutor] Updated term in topology coordinator to 0 due to new config [js_test:multi_coll_drop] 2016-04-06T02:51:57.916-0500 c20011| 2016-04-06T02:51:57.593-0500 I REPL [ReplicationExecutor] New replica set config in use: { _id: "multidrop-configRS", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "mongovm16:20011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongovm16:20012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongovm16:20013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5704c01d3876c4cfd2eb3eb9') } } [js_test:multi_coll_drop] 2016-04-06T02:51:57.919-0500 c20011| 2016-04-06T02:51:57.593-0500 I REPL [ReplicationExecutor] This node is mongovm16:20011 in the config [js_test:multi_coll_drop] 2016-04-06T02:51:57.919-0500 c20011| 2016-04-06T02:51:57.593-0500 I REPL [ReplicationExecutor] transition to STARTUP2 [js_test:multi_coll_drop] 2016-04-06T02:51:57.920-0500 c20011| 2016-04-06T02:51:57.593-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:51:57.593Z [js_test:multi_coll_drop] 2016-04-06T02:51:57.922-0500 c20011| 2016-04-06T02:51:57.593-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:51:57.593Z [js_test:multi_coll_drop] 2016-04-06T02:51:57.925-0500 c20011| 2016-04-06T02:51:57.593-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 7 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:07.593-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.928-0500 c20011| 2016-04-06T02:51:57.593-0500 D ASIO 
[ReplicationExecutor] startCommand: RemoteCommand 8 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:07.593-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.932-0500 c20011| 2016-04-06T02:51:57.593-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 7 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:57.936-0500 c20011| 2016-04-06T02:51:57.593-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 8 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:57.941-0500 c20013| 2016-04-06T02:51:57.594-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.944-0500 c20013| 2016-04-06T02:51:57.594-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:57.946-0500 c20012| 2016-04-06T02:51:57.594-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:57.949-0500 c20012| 2016-04-06T02:51:57.594-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:57.950-0500 c20011| 2016-04-06T02:51:57.594-0500 I REPL [conn1] Starting replication storage threads [js_test:multi_coll_drop] 2016-04-06T02:51:57.951-0500 c20012| 2016-04-06T02:51:57.594-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } numYields:0 reslen:425 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.955-0500 c20011| 2016-04-06T02:51:57.594-0500 I REPL [conn1] Initial sync done, starting steady state replication. 
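
At this point node 20011 has installed config version 1 and is heartbeating peers that are still starting up. A hedged sketch of how the same member states these heartbeats drive could be observed from a shell connected to any node:

    // replSetGetStatus is the command behind the rs.status() shell helper.
    var status = db.adminCommand({replSetGetStatus: 1});
    status.members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr); // e.g. STARTUP, STARTUP2, SECONDARY
    });
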
[js_test:multi_coll_drop] 2016-04-06T02:51:57.959-0500 c20011| 2016-04-06T02:51:57.594-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 7 finished with response: { ok: 1.0, state: 0, v: -2, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: { ts: Timestamp 0|0, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:51:57.959-0500 c20011| 2016-04-06T02:51:57.594-0500 I REPL [conn1] Starting replication fetcher thread [js_test:multi_coll_drop] 2016-04-06T02:51:57.961-0500 c20011| 2016-04-06T02:51:57.594-0500 I REPL [conn1] Starting replication applier threads [js_test:multi_coll_drop] 2016-04-06T02:51:57.963-0500 c20011| 2016-04-06T02:51:57.594-0500 I REPL [ReplicationExecutor] Member mongovm16:20012 is now in state STARTUP [js_test:multi_coll_drop] 2016-04-06T02:51:57.964-0500 c20011| 2016-04-06T02:51:57.594-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:00.094Z [js_test:multi_coll_drop] 2016-04-06T02:51:57.965-0500 c20011| 2016-04-06T02:51:57.594-0500 I REPL [conn1] Starting replication reporter thread [js_test:multi_coll_drop] 2016-04-06T02:51:57.967-0500 c20011| 2016-04-06T02:51:57.594-0500 I REPL [ReplicationExecutor] transition to RECOVERING [js_test:multi_coll_drop] 2016-04-06T02:51:57.970-0500 c20011| 2016-04-06T02:51:57.594-0500 I COMMAND [conn1] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "multidrop-configRS", members: [ { _id: 0.0, host: "mongovm16:20011" }, { _id: 1.0, host: "mongovm16:20012" }, { _id: 2.0, host: "mongovm16:20013" } ], settings: { electionTimeoutMillis: 5000.0 }, configsvr: true } } numYields:0 reslen:82 locks:{ Global: { acquireCount: { r: 5, w: 3, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 86 } }, Database: { acquireCount: { w: 2, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 63ms [js_test:multi_coll_drop] 2016-04-06T02:51:57.973-0500 c20011| 2016-04-06T02:51:57.594-0500 D EXECUTOR [rsBackgroundSync-0] starting thread in pool rsBackgroundSync [js_test:multi_coll_drop] 2016-04-06T02:51:57.974-0500 c20011| 2016-04-06T02:51:57.594-0500 D REPL [rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp 1459929117000|1, t: -1 } 1169182228640141205 [js_test:multi_coll_drop] 2016-04-06T02:51:57.975-0500 c20011| 2016-04-06T02:51:57.594-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.975-0500 c20011| 2016-04-06T02:51:57.594-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:51:57.976-0500 c20011| 2016-04-06T02:51:57.594-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.978-0500 c20011| 2016-04-06T02:51:57.594-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.978-0500 c20011| 2016-04-06T02:51:57.595-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.979-0500 c20011| 2016-04-06T02:51:57.595-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.980-0500 c20011| 2016-04-06T02:51:57.595-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool 
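
The repeated { ismaster: 1.0 } commands that follow are the shell polling each node while it waits for the set to elect a primary. A minimal sketch of such a wait loop, assuming the assert.soon helper available to jstests:

    // Hedged sketch: poll isMaster until the connected node reports primary.
    assert.soon(function () {
        return db.adminCommand({ismaster: 1}).ismaster;
    }, "timed out waiting for the replica set to elect a primary");
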
[js_test:multi_coll_drop] 2016-04-06T02:51:57.982-0500 c20011| 2016-04-06T02:51:57.595-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:51:57.982-0500 c20011| 2016-04-06T02:51:57.595-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.984-0500 c20011| 2016-04-06T02:51:57.595-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.985-0500 c20011| 2016-04-06T02:51:57.595-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.987-0500 c20011| 2016-04-06T02:51:57.595-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.990-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.993-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.994-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.995-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:57.997-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.001-0500 c20011| 2016-04-06T02:51:57.596-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.002-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 0] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.007-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.009-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 2] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.009-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 3] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.012-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 1] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.018-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 4] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.018-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 5] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.019-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 6] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.024-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 7] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:51:58.025-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 8] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.026-0500 c20011| 2016-04-06T02:51:57.596-0500 D EXECUTOR [repl prefetch worker 9] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.027-0500 c20011| 2016-04-06T02:51:57.597-0500 D EXECUTOR [repl prefetch worker 10] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.030-0500 c20011| 2016-04-06T02:51:57.597-0500 D EXECUTOR [repl prefetch worker 11] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.030-0500 c20011| 2016-04-06T02:51:57.597-0500 D EXECUTOR [repl prefetch worker 12] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.033-0500 c20011| 2016-04-06T02:51:57.597-0500 D EXECUTOR [repl prefetch worker 13] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.034-0500 c20011| 2016-04-06T02:51:57.597-0500 D EXECUTOR [repl prefetch worker 14] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.038-0500 c20011| 2016-04-06T02:51:57.597-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.041-0500 c20011| 2016-04-06T02:51:57.597-0500 D EXECUTOR [repl prefetch worker 15] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:51:58.044-0500 c20011| 2016-04-06T02:51:57.597-0500 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:51:58.047-0500 c20012| 2016-04-06T02:51:57.598-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.047-0500 c20013| 2016-04-06T02:51:57.598-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } numYields:0 reslen:425 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.052-0500 c20011| 2016-04-06T02:51:57.598-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 8 finished with response: { ok: 1.0, state: 0, v: -2, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: { ts: Timestamp 0|0, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:51:58.055-0500 c20012| 2016-04-06T02:51:57.599-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.055-0500 c20011| 2016-04-06T02:51:57.599-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state STARTUP [js_test:multi_coll_drop] 2016-04-06T02:51:58.059-0500 c20011| 2016-04-06T02:51:57.599-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:00.099Z [js_test:multi_coll_drop] 2016-04-06T02:51:58.059-0500 c20013| 2016-04-06T02:51:57.599-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.060-0500 c20013| 2016-04-06T02:51:57.599-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
2016-04-06T02:51:58.061-0500 c20011| 2016-04-06T02:51:57.799-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.063-0500 c20011| 2016-04-06T02:51:57.800-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.063-0500 c20012| 2016-04-06T02:51:57.800-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.064-0500 c20012| 2016-04-06T02:51:57.801-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.065-0500 c20013| 2016-04-06T02:51:57.801-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.068-0500 c20013| 2016-04-06T02:51:57.801-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.069-0500 c20011| 2016-04-06T02:51:58.020-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.069-0500 c20012| 2016-04-06T02:51:58.021-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.070-0500 c20012| 2016-04-06T02:51:58.021-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.071-0500 c20011| 2016-04-06T02:51:58.021-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.071-0500 c20013| 2016-04-06T02:51:58.022-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.074-0500 c20013| 2016-04-06T02:51:58.022-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.232-0500 c20011| 2016-04-06T02:51:58.223-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.235-0500 c20011| 2016-04-06T02:51:58.224-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.236-0500 c20012| 2016-04-06T02:51:58.224-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.238-0500 c20012| 2016-04-06T02:51:58.224-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.240-0500 c20013| 2016-04-06T02:51:58.225-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.242-0500 c20013| 2016-04-06T02:51:58.225-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.428-0500 c20011| 2016-04-06T02:51:58.427-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.429-0500 c20011| 
2016-04-06T02:51:58.428-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.430-0500 c20012| 2016-04-06T02:51:58.428-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.431-0500 c20012| 2016-04-06T02:51:58.428-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.431-0500 c20013| 2016-04-06T02:51:58.429-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.441-0500 c20013| 2016-04-06T02:51:58.429-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.630-0500 c20011| 2016-04-06T02:51:58.629-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.631-0500 c20011| 2016-04-06T02:51:58.629-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.632-0500 c20012| 2016-04-06T02:51:58.630-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.638-0500 c20012| 2016-04-06T02:51:58.630-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.638-0500 c20013| 2016-04-06T02:51:58.630-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.641-0500 c20013| 2016-04-06T02:51:58.630-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.831-0500 c20011| 2016-04-06T02:51:58.830-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.833-0500 c20011| 2016-04-06T02:51:58.831-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.833-0500 c20012| 2016-04-06T02:51:58.831-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.835-0500 c20012| 2016-04-06T02:51:58.831-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:58.836-0500 c20013| 2016-04-06T02:51:58.831-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:58.837-0500 c20013| 2016-04-06T02:51:58.831-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.033-0500 c20011| 2016-04-06T02:51:59.033-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.036-0500 c20011| 2016-04-06T02:51:59.033-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
2016-04-06T02:51:59.037-0500 c20012| 2016-04-06T02:51:59.033-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.038-0500 c20012| 2016-04-06T02:51:59.034-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.038-0500 c20013| 2016-04-06T02:51:59.034-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.043-0500 c20013| 2016-04-06T02:51:59.034-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.236-0500 c20011| 2016-04-06T02:51:59.235-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.242-0500 c20011| 2016-04-06T02:51:59.235-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.243-0500 c20012| 2016-04-06T02:51:59.235-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.245-0500 c20012| 2016-04-06T02:51:59.235-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.245-0500 c20013| 2016-04-06T02:51:59.236-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.247-0500 c20013| 2016-04-06T02:51:59.236-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.437-0500 c20011| 2016-04-06T02:51:59.436-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.440-0500 c20011| 2016-04-06T02:51:59.437-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.442-0500 c20012| 2016-04-06T02:51:59.437-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.443-0500 c20012| 2016-04-06T02:51:59.437-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.445-0500 c20013| 2016-04-06T02:51:59.437-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.448-0500 c20013| 2016-04-06T02:51:59.437-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:327 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.546-0500 c20012| 2016-04-06T02:51:59.541-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 4 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:09.541-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:59.546-0500 c20012| 2016-04-06T02:51:59.541-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 4 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:59.549-0500 c20011| 2016-04-06T02:51:59.541-0500 D 
COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:59.549-0500 c20011| 2016-04-06T02:51:59.541-0500 D COMMAND [conn2] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:59.554-0500 c20011| 2016-04-06T02:51:59.541-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } numYields:0 reslen:1169 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.566-0500 c20012| 2016-04-06T02:51:59.542-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 4 finished with response: { ok: 1.0, time: 1459929119, config: { _id: "multidrop-configRS", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "mongovm16:20011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongovm16:20012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongovm16:20013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5704c01d3876c4cfd2eb3eb9') } }, e: true, rs: true, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: new Date(6270347811993157633) } [js_test:multi_coll_drop] 2016-04-06T02:51:59.572-0500 c20012| 2016-04-06T02:51:59.542-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:01.542Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.576-0500 c20012| 2016-04-06T02:51:59.542-0500 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1 [js_test:multi_coll_drop] 2016-04-06T02:51:59.577-0500 c20012| 2016-04-06T02:51:59.542-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:51:59.579-0500 c20011| 2016-04-06T02:51:59.542-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58531 #4 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.582-0500 c20012| 2016-04-06T02:51:59.542-0500 D NETWORK [replExecDBWorker-0] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:51:59.584-0500 c20011| 2016-04-06T02:51:59.542-0500 D COMMAND [conn4] run command admin.$cmd { _isSelf: 1 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.586-0500 c20011| 2016-04-06T02:51:59.543-0500 I COMMAND [conn4] command admin.$cmd command: _isSelf { _isSelf: 1 } numYields:0 reslen:113 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.586-0500 c20011| 2016-04-06T02:51:59.543-0500 D NETWORK [conn4] SocketException: remote: 192.168.100.28:58531 error: 9001 socket exception [CLOSED] server [192.168.100.28:58531] [js_test:multi_coll_drop] 2016-04-06T02:51:59.586-0500 c20011| 2016-04-06T02:51:59.543-0500 I NETWORK [conn4] end connection 192.168.100.28:58531 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.588-0500 c20012| 2016-04-06T02:51:59.543-0500 D NETWORK [replExecDBWorker-0] getBoundAddrs(): [ 127.0.0.1] [ 192.168.100.28] [ 192.168.2.13] [js_test:multi_coll_drop] 
2016-04-06T02:51:59.589-0500 c20012| 2016-04-06T02:51:59.543-0500 D NETWORK [replExecDBWorker-0] getAddrsForHost("mongovm16:20012"): [ 192.168.100.28] [js_test:multi_coll_drop] 2016-04-06T02:51:59.591-0500 c20012| 2016-04-06T02:51:59.543-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:51:59.592-0500 c20013| 2016-04-06T02:51:59.543-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49466 #4 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.593-0500 c20012| 2016-04-06T02:51:59.543-0500 D NETWORK [replExecDBWorker-0] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:51:59.594-0500 c20013| 2016-04-06T02:51:59.543-0500 D COMMAND [conn4] run command admin.$cmd { _isSelf: 1 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.595-0500 c20013| 2016-04-06T02:51:59.543-0500 I COMMAND [conn4] command admin.$cmd command: _isSelf { _isSelf: 1 } numYields:0 reslen:113 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.596-0500 c20013| 2016-04-06T02:51:59.543-0500 D NETWORK [conn4] SocketException: remote: 192.168.100.28:49466 error: 9001 socket exception [CLOSED] server [192.168.100.28:49466] [js_test:multi_coll_drop] 2016-04-06T02:51:59.597-0500 c20013| 2016-04-06T02:51:59.543-0500 I NETWORK [conn4] end connection 192.168.100.28:49466 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.599-0500 c20012| 2016-04-06T02:51:59.543-0500 D STORAGE [replExecDBWorker-0] stored meta data for local.system.replset @ RecordId(3) [js_test:multi_coll_drop] 2016-04-06T02:51:59.603-0500 c20012| 2016-04-06T02:51:59.544-0500 D STORAGE [replExecDBWorker-0] WiredTigerKVEngine::createRecordStore uri: table:collection-4-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:51:59.606-0500 c20013| 2016-04-06T02:51:59.545-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 4 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:09.545-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:59.608-0500 c20013| 2016-04-06T02:51:59.545-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 4 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:59.610-0500 c20011| 2016-04-06T02:51:59.545-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:59.610-0500 c20011| 2016-04-06T02:51:59.545-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:59.623-0500 c20013| 2016-04-06T02:51:59.545-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 4 finished with response: { ok: 1.0, time: 1459929119, config: { _id: "multidrop-configRS", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "mongovm16:20011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongovm16:20012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongovm16:20013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 
} ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5704c01d3876c4cfd2eb3eb9') } }, e: true, rs: true, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: new Date(6270347811993157633) } [js_test:multi_coll_drop] 2016-04-06T02:51:59.626-0500 c20011| 2016-04-06T02:51:59.545-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } numYields:0 reslen:1169 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.629-0500 c20013| 2016-04-06T02:51:59.545-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:01.545Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.629-0500 c20013| 2016-04-06T02:51:59.545-0500 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1 [js_test:multi_coll_drop] 2016-04-06T02:51:59.629-0500 c20013| 2016-04-06T02:51:59.545-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:51:59.630-0500 c20011| 2016-04-06T02:51:59.545-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58533 #5 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.634-0500 c20013| 2016-04-06T02:51:59.546-0500 D NETWORK [replExecDBWorker-0] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:51:59.634-0500 c20011| 2016-04-06T02:51:59.546-0500 D COMMAND [conn5] run command admin.$cmd { _isSelf: 1 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.635-0500 c20011| 2016-04-06T02:51:59.546-0500 I COMMAND [conn5] command admin.$cmd command: _isSelf { _isSelf: 1 } numYields:0 reslen:113 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.636-0500 c20011| 2016-04-06T02:51:59.546-0500 D NETWORK [conn5] SocketException: remote: 192.168.100.28:58533 error: 9001 socket exception [CLOSED] server [192.168.100.28:58533] [js_test:multi_coll_drop] 2016-04-06T02:51:59.640-0500 c20011| 2016-04-06T02:51:59.546-0500 I NETWORK [conn5] end connection 192.168.100.28:58533 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.642-0500 c20012| 2016-04-06T02:51:59.546-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36203 #4 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.643-0500 c20013| 2016-04-06T02:51:59.546-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:51:59.644-0500 c20013| 2016-04-06T02:51:59.547-0500 D NETWORK [replExecDBWorker-0] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:51:59.645-0500 c20012| 2016-04-06T02:51:59.547-0500 D COMMAND [conn4] run command admin.$cmd { _isSelf: 1 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.648-0500 c20012| 2016-04-06T02:51:59.547-0500 I COMMAND [conn4] command admin.$cmd command: _isSelf { _isSelf: 1 } numYields:0 reslen:113 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.651-0500 c20012| 2016-04-06T02:51:59.547-0500 D NETWORK [conn4] SocketException: remote: 192.168.100.28:36203 error: 9001 socket exception [CLOSED] server [192.168.100.28:36203] [js_test:multi_coll_drop] 2016-04-06T02:51:59.652-0500 c20012| 2016-04-06T02:51:59.547-0500 I 
NETWORK [conn4] end connection 192.168.100.28:36203 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.656-0500 c20013| 2016-04-06T02:51:59.547-0500 D NETWORK [replExecDBWorker-0] getBoundAddrs(): [ 127.0.0.1] [ 192.168.100.28] [ 192.168.2.13] [js_test:multi_coll_drop] 2016-04-06T02:51:59.658-0500 c20013| 2016-04-06T02:51:59.547-0500 D NETWORK [replExecDBWorker-0] getAddrsForHost("mongovm16:20013"): [ 192.168.100.28] [js_test:multi_coll_drop] 2016-04-06T02:51:59.660-0500 c20013| 2016-04-06T02:51:59.547-0500 D STORAGE [replExecDBWorker-0] stored meta data for local.system.replset @ RecordId(3) [js_test:multi_coll_drop] 2016-04-06T02:51:59.663-0500 c20013| 2016-04-06T02:51:59.547-0500 D STORAGE [replExecDBWorker-0] WiredTigerKVEngine::createRecordStore uri: table:collection-4-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:51:59.665-0500 c20012| 2016-04-06T02:51:59.550-0500 D STORAGE [replExecDBWorker-0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-4-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:51:59.668-0500 c20012| 2016-04-06T02:51:59.550-0500 D STORAGE [replExecDBWorker-0] local.system.replset: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:59.670-0500 c20012| 2016-04-06T02:51:59.550-0500 D STORAGE [replExecDBWorker-0] WiredTigerKVEngine::createSortedDataInterface ident: index-5-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }), [js_test:multi_coll_drop] 2016-04-06T02:51:59.672-0500 c20012| 2016-04-06T02:51:59.550-0500 D STORAGE [replExecDBWorker-0] create uri: table:index-5-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }), [js_test:multi_coll_drop] 2016-04-06T02:51:59.675-0500 c20013| 2016-04-06T02:51:59.558-0500 D STORAGE [replExecDBWorker-0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-4-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:51:59.678-0500 c20012| 2016-04-06T02:51:59.558-0500 D STORAGE [replExecDBWorker-0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-5-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:51:59.680-0500 c20012| 2016-04-06T02:51:59.558-0500 D STORAGE [replExecDBWorker-0] local.system.replset: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:59.682-0500 c20012| 2016-04-06T02:51:59.558-0500 D QUERY [replExecDBWorker-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:51:59.685-0500 c20013| 2016-04-06T02:51:59.558-0500 D STORAGE [replExecDBWorker-0] local.system.replset: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:59.690-0500 c20013| 2016-04-06T02:51:59.558-0500 D STORAGE [replExecDBWorker-0] WiredTigerKVEngine::createSortedDataInterface ident: index-5-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }), [js_test:multi_coll_drop] 2016-04-06T02:51:59.694-0500 c20013| 2016-04-06T02:51:59.558-0500 D STORAGE [replExecDBWorker-0] create uri: table:index-5-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }), [js_test:multi_coll_drop] 2016-04-06T02:51:59.697-0500 c20012| 2016-04-06T02:51:59.558-0500 I REPL [replExecDBWorker-0] Starting replication storage threads [js_test:multi_coll_drop] 2016-04-06T02:51:59.698-0500 c20012| 2016-04-06T02:51:59.558-0500 I REPL [initial sync] Starting replication fetcher thread [js_test:multi_coll_drop] 2016-04-06T02:51:59.699-0500 c20012| 2016-04-06T02:51:59.558-0500 D REPL [ReplicationExecutor] Updated term in topology coordinator to 0 due to new config [js_test:multi_coll_drop] 2016-04-06T02:51:59.701-0500 c20012| 2016-04-06T02:51:59.558-0500 I REPL [initial sync] ****** [js_test:multi_coll_drop] 2016-04-06T02:51:59.702-0500 c20012| 2016-04-06T02:51:59.559-0500 I REPL [initial sync] creating replication oplog of size: 40MB... 
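
The 40MB oplog c20012 is creating just above traces back to the harness's "oplogSize" : 40 option rather than a server default. As a rough illustration only (this is not code from the test itself), a shell sketch of bringing up a set the same way and confirming the resulting capped-oplog byte limit on a member; the ReplSetTest call is a plausible harness invocation, and 41943040 is simply 40 * 1024 * 1024:

    // Sketch only: a 3-node set with a 40MB oplog, mirroring the harness option.
    var rst = new ReplSetTest({nodes: 3, oplogSize: 40});
    rst.startSet();
    rst.initiate();
    // The oplog is a capped collection; stats() exposes its byte cap as maxSize.
    var oplogStats = rst.getPrimary().getDB("local").oplog.rs.stats();
    assert.eq(40 * 1024 * 1024, oplogStats.maxSize);  // 41943040 bytes
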
[js_test:multi_coll_drop] 2016-04-06T02:51:59.709-0500 c20012| 2016-04-06T02:51:59.559-0500 I REPL [ReplicationExecutor] New replica set config in use: { _id: "multidrop-configRS", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "mongovm16:20011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongovm16:20012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongovm16:20013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5704c01d3876c4cfd2eb3eb9') } } [js_test:multi_coll_drop] 2016-04-06T02:51:59.709-0500 c20012| 2016-04-06T02:51:59.559-0500 I REPL [ReplicationExecutor] This node is mongovm16:20012 in the config [js_test:multi_coll_drop] 2016-04-06T02:51:59.712-0500 c20012| 2016-04-06T02:51:59.559-0500 D STORAGE [initial sync] stored meta data for local.oplog.rs @ RecordId(4) [js_test:multi_coll_drop] 2016-04-06T02:51:59.715-0500 c20012| 2016-04-06T02:51:59.559-0500 I REPL [ReplicationExecutor] transition to STARTUP2 [js_test:multi_coll_drop] 2016-04-06T02:51:59.715-0500 c20012| 2016-04-06T02:51:59.559-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:51:59.559Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.719-0500 c20012| 2016-04-06T02:51:59.559-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:51:59.559Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.721-0500 c20012| 2016-04-06T02:51:59.559-0500 D STORAGE [initial sync] WiredTigerKVEngine::createRecordStore uri: table:collection-6-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,type=file,memory_page_max=10m,key_format=q,value_format=u,app_metadata=(formatVersion=1,oplogKeyExtractionVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:51:59.723-0500 c20012| 2016-04-06T02:51:59.559-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 6 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:09.559-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.726-0500 c20012| 2016-04-06T02:51:59.559-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 6 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:59.733-0500 c20012| 2016-04-06T02:51:59.559-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 7 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:09.559-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.734-0500 c20012| 2016-04-06T02:51:59.559-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:59.737-0500 c20011| 2016-04-06T02:51:59.559-0500 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.737-0500 c20011| 2016-04-06T02:51:59.559-0500 D COMMAND 
[conn2] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:59.741-0500 c20012| 2016-04-06T02:51:59.559-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 8 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:59.748-0500 c20011| 2016-04-06T02:51:59.559-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.756-0500 c20012| 2016-04-06T02:51:59.559-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 6 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:51:59.758-0500 c20012| 2016-04-06T02:51:59.559-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:51:59.761-0500 c20012| 2016-04-06T02:51:59.559-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:02.059Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.765-0500 c20012| 2016-04-06T02:51:59.559-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:59.766-0500 c20012| 2016-04-06T02:51:59.559-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 8 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:51:59.769-0500 c20012| 2016-04-06T02:51:59.559-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 7 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:51:59.774-0500 c20012| 2016-04-06T02:51:59.560-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 7 finished with response: { ok: 1.0, state: 0, v: -2, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: { ts: Timestamp 0|0, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:51:59.775-0500 c20012| 2016-04-06T02:51:59.560-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36205 #5 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.792-0500 c20012| 2016-04-06T02:51:59.560-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state STARTUP [js_test:multi_coll_drop] 2016-04-06T02:51:59.794-0500 c20012| 2016-04-06T02:51:59.560-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:02.060Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.799-0500 c20012| 2016-04-06T02:51:59.560-0500 D COMMAND [conn5] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:51:59.801-0500 c20012| 2016-04-06T02:51:59.560-0500 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.805-0500 c20013| 2016-04-06T02:51:59.559-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49469 #5 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:51:59.806-0500 c20013| 2016-04-06T02:51:59.559-0500 D COMMAND [conn5] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:51:59.808-0500 c20013| 2016-04-06T02:51:59.559-0500 I 
COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:342 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.810-0500 c20013| 2016-04-06T02:51:59.559-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.811-0500 c20013| 2016-04-06T02:51:59.559-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:59.813-0500 c20013| 2016-04-06T02:51:59.560-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:51:59.560Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.818-0500 c20013| 2016-04-06T02:51:59.560-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 6 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:09.560-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:59.819-0500 c20013| 2016-04-06T02:51:59.560-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:59.820-0500 c20013| 2016-04-06T02:51:59.560-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } numYields:0 reslen:425 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.821-0500 c20013| 2016-04-06T02:51:59.560-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 7 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:59.822-0500 c20013| 2016-04-06T02:51:59.561-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:59.824-0500 c20013| 2016-04-06T02:51:59.561-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 7 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:51:59.825-0500 c20013| 2016-04-06T02:51:59.561-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 6 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:59.826-0500 c20012| 2016-04-06T02:51:59.561-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } [js_test:multi_coll_drop] 2016-04-06T02:51:59.828-0500 c20012| 2016-04-06T02:51:59.561-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:59.830-0500 c20012| 2016-04-06T02:51:59.561-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:51:59.833-0500 c20012| 2016-04-06T02:51:59.561-0500 D EXECUTOR [rsBackgroundSync-0] starting thread in pool rsBackgroundSync [js_test:multi_coll_drop] 2016-04-06T02:51:59.835-0500 c20012| 2016-04-06T02:51:59.561-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", pv: 1, v: -2, from: "", checkEmpty: false } numYields:0 reslen:1169 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.842-0500 c20012| 2016-04-06T02:51:59.562-0500 D STORAGE [initial sync] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-6-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:51:59.842-0500 c20012| 
2016-04-06T02:51:59.562-0500 I STORAGE [initial sync] Starting WiredTigerRecordStoreThread local.oplog.rs [js_test:multi_coll_drop] 2016-04-06T02:51:59.845-0500 c20012| 2016-04-06T02:51:59.563-0500 I STORAGE [initial sync] The size storer reports that the oplog contains 0 records totaling to 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:51:59.845-0500 c20012| 2016-04-06T02:51:59.563-0500 I STORAGE [initial sync] Scanning the oplog to determine where to place markers for truncation [js_test:multi_coll_drop] 2016-04-06T02:51:59.846-0500 c20012| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] local.oplog.rs: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:59.847-0500 c20012| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] WiredTigerKVEngine::flushAllFiles [js_test:multi_coll_drop] 2016-04-06T02:51:59.849-0500 c20012| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:_mdb_catalog -> { numRecords: 4, dataSize: 1096 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.850-0500 c20012| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:collection-0-6577373056560964212 -> { numRecords: 1, dataSize: 42 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.851-0500 c20012| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:collection-2-6577373056560964212 -> { numRecords: 1, dataSize: 1835 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.853-0500 c20012| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:collection-4-6577373056560964212 -> { numRecords: 1, dataSize: 733 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.855-0500 c20012| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:collection-6-6577373056560964212 -> { numRecords: 0, dataSize: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.856-0500 c20011| 2016-04-06T02:51:59.563-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.858-0500 c20011| 2016-04-06T02:51:59.563-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:59.860-0500 c20012| 2016-04-06T02:51:59.563-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.861-0500 c20012| 2016-04-06T02:51:59.563-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:51:59.867-0500 c20012| 2016-04-06T02:51:59.563-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.868-0500 c20011| 2016-04-06T02:51:59.563-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:51:59.877-0500 c20013| 2016-04-06T02:51:59.561-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 6 finished with response: { ok: 1.0, time: 1459929119, config: { _id: "multidrop-configRS", version: 1, 
configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "mongovm16:20011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongovm16:20012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongovm16:20013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5704c01d3876c4cfd2eb3eb9') } }, e: false, rs: true, state: 5, v: 1, hbmsg: "", set: "multidrop-configRS", durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: new Date(0) } [js_test:multi_coll_drop] 2016-04-06T02:51:59.878-0500 c20013| 2016-04-06T02:51:59.562-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:01.562Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.881-0500 c20013| 2016-04-06T02:51:59.562-0500 D REPL [ReplicationExecutor] Ignoring new configuration with version 1 because already in the midst of a configuration process [js_test:multi_coll_drop] 2016-04-06T02:51:59.883-0500 c20013| 2016-04-06T02:51:59.562-0500 D STORAGE [replExecDBWorker-0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-5-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:51:59.885-0500 c20013| 2016-04-06T02:51:59.562-0500 D STORAGE [replExecDBWorker-0] local.system.replset: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:59.888-0500 c20013| 2016-04-06T02:51:59.563-0500 D QUERY [replExecDBWorker-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:51:59.890-0500 c20013| 2016-04-06T02:51:59.563-0500 I REPL [replExecDBWorker-0] Starting replication storage threads [js_test:multi_coll_drop] 2016-04-06T02:51:59.891-0500 c20013| 2016-04-06T02:51:59.563-0500 I REPL [initial sync] Starting replication fetcher thread [js_test:multi_coll_drop] 2016-04-06T02:51:59.893-0500 c20013| 2016-04-06T02:51:59.563-0500 D REPL [ReplicationExecutor] Updated term in topology coordinator to 0 due to new config [js_test:multi_coll_drop] 2016-04-06T02:51:59.899-0500 c20013| 2016-04-06T02:51:59.563-0500 I REPL [ReplicationExecutor] New replica set config in use: { _id: "multidrop-configRS", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "mongovm16:20011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongovm16:20012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongovm16:20013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5704c01d3876c4cfd2eb3eb9') } } [js_test:multi_coll_drop] 2016-04-06T02:51:59.900-0500 c20013| 2016-04-06T02:51:59.563-0500 I REPL [ReplicationExecutor] This node is mongovm16:20013 in the config [js_test:multi_coll_drop] 2016-04-06T02:51:59.900-0500 c20013| 2016-04-06T02:51:59.563-0500 I REPL [ReplicationExecutor] transition to STARTUP2 [js_test:multi_coll_drop] 2016-04-06T02:51:59.901-0500 c20013| 2016-04-06T02:51:59.563-0500 I REPL [initial sync] ****** [js_test:multi_coll_drop] 2016-04-06T02:51:59.903-0500 c20013| 2016-04-06T02:51:59.563-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:51:59.563Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.905-0500 c20013| 2016-04-06T02:51:59.563-0500 I REPL [initial sync] creating replication oplog of size: 40MB... 
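
The heartbeat responses above carry numeric member-state codes that line up with the "now in state ..." messages: 0 is STARTUP, 5 is STARTUP2, 2 is SECONDARY. A hedged shell sketch of watching those transitions from outside the set, using the host names from the logged config (not something this test does):

    // Sketch: poll member states on one config server while the set initializes.
    var conn = new Mongo("mongovm16:20011");  // host taken from the logged config
    var status = conn.getDB("admin").runCommand({replSetGetStatus: 1});
    status.members.forEach(function(m) {
        // state is the numeric code seen in the heartbeat responses above;
        // stateStr is its name, e.g. 0/STARTUP, 5/STARTUP2, 2/SECONDARY.
        print(m.name + " -> " + m.state + " (" + m.stateStr + ")");
    });
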
[js_test:multi_coll_drop] 2016-04-06T02:51:59.908-0500 c20013| 2016-04-06T02:51:59.563-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:51:59.563Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.910-0500 c20013| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] stored meta data for local.oplog.rs @ RecordId(4) [js_test:multi_coll_drop] 2016-04-06T02:51:59.912-0500 c20013| 2016-04-06T02:51:59.563-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 9 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:09.563-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.914-0500 c20013| 2016-04-06T02:51:59.563-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 10 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:09.563-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.922-0500 c20013| 2016-04-06T02:51:59.563-0500 D STORAGE [initial sync] WiredTigerKVEngine::createRecordStore uri: table:collection-6-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,type=file,memory_page_max=10m,key_format=q,value_format=u,app_metadata=(formatVersion=1,oplogKeyExtractionVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:51:59.923-0500 c20013| 2016-04-06T02:51:59.563-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 9 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:51:59.926-0500 c20013| 2016-04-06T02:51:59.563-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 10 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:51:59.928-0500 c20013| 2016-04-06T02:51:59.563-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:51:59.929-0500 c20013| 2016-04-06T02:51:59.564-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 10 finished with response: { ok: 1.0, state: 5, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 0|0, t: -1 }, opTime: { ts: Timestamp 0|0, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:51:59.933-0500 c20013| 2016-04-06T02:51:59.564-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 9 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:51:59.933-0500 c20013| 2016-04-06T02:51:59.564-0500 I REPL [ReplicationExecutor] Member mongovm16:20012 is now in state STARTUP2 [js_test:multi_coll_drop] 2016-04-06T02:51:59.934-0500 c20013| 2016-04-06T02:51:59.564-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:02.064Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.938-0500 c20013| 2016-04-06T02:51:59.564-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:51:59.941-0500 c20013| 2016-04-06T02:51:59.564-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:02.064Z [js_test:multi_coll_drop] 2016-04-06T02:51:59.943-0500 c20012| 2016-04-06T02:51:59.565-0500 D COMMAND [WT RecordStoreThread: local.oplog.rs] 
BackgroundJob starting: WT RecordStoreThread: local.oplog.rs [js_test:multi_coll_drop] 2016-04-06T02:51:59.948-0500 c20013| 2016-04-06T02:51:59.567-0500 D STORAGE [initial sync] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-6-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:51:59.953-0500 c20013| 2016-04-06T02:51:59.567-0500 I STORAGE [initial sync] Starting WiredTigerRecordStoreThread local.oplog.rs [js_test:multi_coll_drop] 2016-04-06T02:51:59.955-0500 c20013| 2016-04-06T02:51:59.567-0500 I STORAGE [initial sync] The size storer reports that the oplog contains 0 records totaling to 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:51:59.957-0500 c20013| 2016-04-06T02:51:59.567-0500 I STORAGE [initial sync] Scanning the oplog to determine where to place markers for truncation [js_test:multi_coll_drop] 2016-04-06T02:51:59.959-0500 c20013| 2016-04-06T02:51:59.567-0500 D COMMAND [WT RecordStoreThread: local.oplog.rs] BackgroundJob starting: WT RecordStoreThread: local.oplog.rs [js_test:multi_coll_drop] 2016-04-06T02:51:59.962-0500 c20013| 2016-04-06T02:51:59.567-0500 D STORAGE [initial sync] local.oplog.rs: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:59.963-0500 c20013| 2016-04-06T02:51:59.567-0500 D STORAGE [initial sync] WiredTigerKVEngine::flushAllFiles [js_test:multi_coll_drop] 2016-04-06T02:51:59.964-0500 c20013| 2016-04-06T02:51:59.567-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:_mdb_catalog -> { numRecords: 4, dataSize: 1089 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.968-0500 c20013| 2016-04-06T02:51:59.567-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:collection-0-751336887848580549 -> { numRecords: 1, dataSize: 42 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.969-0500 c20013| 2016-04-06T02:51:59.567-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:collection-2-751336887848580549 -> { numRecords: 1, dataSize: 1835 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.973-0500 c20013| 2016-04-06T02:51:59.567-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:collection-4-751336887848580549 -> { numRecords: 1, dataSize: 733 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.975-0500 c20013| 2016-04-06T02:51:59.567-0500 D STORAGE [initial sync] WiredTigerSizeStorer::storeInto table:collection-6-751336887848580549 -> { numRecords: 0, dataSize: 0 } [js_test:multi_coll_drop] 2016-04-06T02:51:59.976-0500 c20013| 2016-04-06T02:51:59.575-0500 D EXECUTOR [rsBackgroundSync-0] starting thread in pool rsBackgroundSync [js_test:multi_coll_drop] 2016-04-06T02:51:59.977-0500 c20012| 2016-04-06T02:51:59.586-0500 I REPL [initial sync] ****** [js_test:multi_coll_drop] 2016-04-06T02:51:59.977-0500 c20012| 2016-04-06T02:51:59.586-0500 I REPL [initial sync] initial sync pending [js_test:multi_coll_drop] 2016-04-06T02:51:59.979-0500 c20012| 2016-04-06T02:51:59.586-0500 D STORAGE [initial sync] stored meta data for local.replset.minvalid @ RecordId(5) [js_test:multi_coll_drop] 2016-04-06T02:51:59.982-0500 c20012| 2016-04-06T02:51:59.586-0500 D STORAGE [initial sync] WiredTigerKVEngine::createRecordStore uri: table:collection-7-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:51:59.986-0500 c20012| 2016-04-06T02:51:59.590-0500 D STORAGE 
[initial sync] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-7-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:51:59.986-0500 c20012| 2016-04-06T02:51:59.590-0500 D STORAGE [initial sync] local.replset.minvalid: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:51:59.990-0500 c20012| 2016-04-06T02:51:59.590-0500 D STORAGE [initial sync] WiredTigerKVEngine::createSortedDataInterface ident: index-8-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" }), [js_test:multi_coll_drop] 2016-04-06T02:51:59.995-0500 c20012| 2016-04-06T02:51:59.590-0500 D STORAGE [initial sync] create uri: table:index-8-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" }), [js_test:multi_coll_drop] 2016-04-06T02:51:59.995-0500 c20012| 2016-04-06T02:51:59.593-0500 D STORAGE [initial sync] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-8-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:51:59.997-0500 c20012| 2016-04-06T02:51:59.593-0500 D STORAGE [initial sync] local.replset.minvalid: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:00.015-0500 c20012| 2016-04-06T02:51:59.594-0500 D QUERY [initial sync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:00.016-0500 c20012| 2016-04-06T02:51:59.594-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:00.021-0500 c20011| 2016-04-06T02:51:59.594-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58537 #6 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:00.022-0500 c20012| 2016-04-06T02:51:59.594-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:00.025-0500 c20012| 2016-04-06T02:51:59.595-0500 D NETWORK [initial sync] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:00.026-0500 c20011| 2016-04-06T02:51:59.595-0500 D COMMAND [conn6] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:52:00.028-0500 c20011| 2016-04-06T02:51:59.595-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.028-0500 c20011| 2016-04-06T02:51:59.595-0500 D QUERY [conn6] Running query: query: {} sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:00.029-0500 c20011| 2016-04-06T02:51:59.595-0500 D QUERY [conn6] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} ntoreturn=1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:00.033-0500 c20011| 2016-04-06T02:51:59.596-0500 I COMMAND [conn6] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.033-0500 c20013| 2016-04-06T02:51:59.596-0500 I REPL [initial sync] ****** [js_test:multi_coll_drop] 2016-04-06T02:52:00.035-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.038-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.039-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.040-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.041-0500 c20013| 2016-04-06T02:51:59.596-0500 I REPL [initial sync] initial sync pending [js_test:multi_coll_drop] 2016-04-06T02:52:00.041-0500 c20013| 2016-04-06T02:51:59.596-0500 D STORAGE [initial sync] stored meta data for local.replset.minvalid @ RecordId(5) [js_test:multi_coll_drop] 2016-04-06T02:52:00.046-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.050-0500 c20013| 2016-04-06T02:51:59.596-0500 D STORAGE [initial sync] WiredTigerKVEngine::createRecordStore uri: table:collection-7-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:00.052-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.061-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.062-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.063-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.067-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.070-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.074-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.075-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:00.078-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.078-0500 c20012| 2016-04-06T02:51:59.596-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.079-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.080-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 0] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.082-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 1] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.085-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 2] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.085-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 3] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.086-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 4] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.087-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 5] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.088-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 7] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.092-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 6] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.093-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 8] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.095-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 9] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.096-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 10] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.098-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 11] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.099-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 12] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.099-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 13] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.101-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 14] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.102-0500 c20012| 2016-04-06T02:51:59.597-0500 D EXECUTOR [repl prefetch worker 15] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.104-0500 c20011| 2016-04-06T02:51:59.597-0500 D QUERY [conn6] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1 
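
The conn6 queries being served here are the syncing node reading the newest entry of the sync source's oplog with a reverse natural-order scan. An equivalent read from the shell, sketched; since the oplog has no indexes, the COLLSCAN plan reported below is expected:

    // Sketch: the same "newest oplog entry" read that conn6 is serving above.
    var conn = new Mongo("mongovm16:20011");  // the sync source in this log
    var lastOp = conn.getDB("local").oplog.rs
                     .find()
                     .sort({$natural: -1})  // reverse natural order: newest first
                     .limit(1)
                     .next();
    printjson(lastOp.ts);  // the optime the syncing node records as its target
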
[js_test:multi_coll_drop] 2016-04-06T02:52:00.107-0500 c20011| 2016-04-06T02:51:59.597-0500 D QUERY [conn6] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:00.109-0500 c20011| 2016-04-06T02:51:59.597-0500 I COMMAND [conn6] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.112-0500 c20012| 2016-04-06T02:51:59.597-0500 D QUERY [initial sync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:00.114-0500 c20013| 2016-04-06T02:51:59.598-0500 D STORAGE [initial sync] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-7-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:00.117-0500 c20013| 2016-04-06T02:51:59.599-0500 D STORAGE [initial sync] local.replset.minvalid: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:00.120-0500 c20013| 2016-04-06T02:51:59.599-0500 D STORAGE [initial sync] WiredTigerKVEngine::createSortedDataInterface ident: index-8-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" }), [js_test:multi_coll_drop] 2016-04-06T02:52:00.124-0500 c20013| 2016-04-06T02:51:59.599-0500 D STORAGE [initial sync] create uri: table:index-8-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" }), [js_test:multi_coll_drop] 2016-04-06T02:52:00.125-0500 c20012| 2016-04-06T02:51:59.599-0500 I REPL [initial sync] initial sync drop all databases [js_test:multi_coll_drop] 2016-04-06T02:52:00.127-0500 c20012| 2016-04-06T02:51:59.599-0500 I STORAGE [initial sync] dropAllDatabasesExceptLocal 1 [js_test:multi_coll_drop] 2016-04-06T02:52:00.128-0500 c20012| 2016-04-06T02:51:59.599-0500 I REPL [initial sync] initial sync clone all databases [js_test:multi_coll_drop] 2016-04-06T02:52:00.129-0500 c20011| 2016-04-06T02:51:59.599-0500 D COMMAND [conn6] run command admin.$cmd { listDatabases: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.129-0500 c20011| 2016-04-06T02:51:59.599-0500 D COMMAND [conn6] command: listDatabases [js_test:multi_coll_drop] 2016-04-06T02:52:00.131-0500 c20011| 2016-04-06T02:51:59.599-0500 I COMMAND [conn6] command admin.$cmd command: listDatabases { listDatabases: 1 } numYields:0 reslen:169 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.134-0500 c20012| 2016-04-06T02:51:59.599-0500 I REPL [initial sync] initial sync data copy, starting syncup [js_test:multi_coll_drop] 2016-04-06T02:52:00.134-0500 c20012| 2016-04-06T02:51:59.599-0500 I REPL [initial sync] 
oplog sync 1 of 3 [js_test:multi_coll_drop] 2016-04-06T02:52:00.136-0500 c20011| 2016-04-06T02:51:59.599-0500 D QUERY [conn6] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:00.141-0500 c20011| 2016-04-06T02:51:59.599-0500 D QUERY [conn6] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:00.142-0500 c20011| 2016-04-06T02:51:59.600-0500 I COMMAND [conn6] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.143-0500 c20012| 2016-04-06T02:51:59.600-0500 I REPL [initial sync] oplog sync 2 of 3 [js_test:multi_coll_drop] 2016-04-06T02:52:00.147-0500 c20011| 2016-04-06T02:51:59.600-0500 D QUERY [conn6] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:00.150-0500 c20011| 2016-04-06T02:51:59.600-0500 D QUERY [conn6] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:00.155-0500 c20011| 2016-04-06T02:51:59.600-0500 I COMMAND [conn6] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.156-0500 c20012| 2016-04-06T02:51:59.600-0500 I REPL [initial sync] initial sync building indexes [js_test:multi_coll_drop] 2016-04-06T02:52:00.161-0500 c20012| 2016-04-06T02:51:59.600-0500 I REPL [initial sync] oplog sync 3 of 3 [js_test:multi_coll_drop] 2016-04-06T02:52:00.162-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.164-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.165-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.166-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.168-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.168-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.169-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.170-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.173-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.174-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.178-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.180-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.184-0500 c20012| 2016-04-06T02:51:59.600-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.184-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.185-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.188-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.189-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 0] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.190-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 1] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.192-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 2] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.194-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 3] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.196-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 4] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.199-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 5] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.201-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 6] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.204-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 7] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.205-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 8] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.205-0500 c20012| 2016-04-06T02:51:59.601-0500 D EXECUTOR [repl prefetch worker 9] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.206-0500 c20012| 2016-04-06T02:51:59.602-0500 D EXECUTOR [repl prefetch worker 10] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.208-0500 c20012| 2016-04-06T02:51:59.602-0500 D EXECUTOR [repl prefetch worker 11] starting thread in pool repl prefetch worker Pool 
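
Exactly sixteen "repl writer worker" threads (0 through 15) spin up above, which matches the replWriterThreadCount server parameter's default of 16 in this era; the prefetch pool is sized the same way. A sketch of reading that parameter back (an assumption that it is visible via getParameter on this build, not something the test checks):

    // Sketch: confirm why workers 0-15 appear above. The writer pool size is
    // governed by the replWriterThreadCount startup parameter (default 16).
    var conn = new Mongo("mongovm16:20012");
    var res = conn.getDB("admin").runCommand({getParameter: 1, replWriterThreadCount: 1});
    printjson(res);  // expected shape: { "replWriterThreadCount" : 16, "ok" : 1 }
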
[js_test:multi_coll_drop] 2016-04-06T02:52:00.209-0500 c20012| 2016-04-06T02:51:59.602-0500 D EXECUTOR [repl prefetch worker 12] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.210-0500 c20012| 2016-04-06T02:51:59.602-0500 D EXECUTOR [repl prefetch worker 13] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.213-0500 c20012| 2016-04-06T02:51:59.602-0500 D EXECUTOR [repl prefetch worker 14] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.214-0500 c20012| 2016-04-06T02:51:59.602-0500 D EXECUTOR [repl prefetch worker 15] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.217-0500 c20011| 2016-04-06T02:51:59.602-0500 D QUERY [conn6] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:00.221-0500 c20011| 2016-04-06T02:51:59.602-0500 D QUERY [conn6] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.226-0500 c20011| 2016-04-06T02:51:59.602-0500 I COMMAND [conn6] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.227-0500 c20012| 2016-04-06T02:51:59.602-0500 D QUERY [initial sync] Running query: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:00.228-0500 c20012| 2016-04-06T02:51:59.603-0500 D QUERY [initial sync] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:00.232-0500 c20012| 2016-04-06T02:51:59.603-0500 I COMMAND [initial sync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 8, w: 5, W: 1 } }, Database: { acquireCount: { r: 1, W: 5 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 90 } }, Collection: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.233-0500 c20012| 2016-04-06T02:51:59.603-0500 I REPL [initial sync] initial sync finishing up
[js_test:multi_coll_drop] 2016-04-06T02:52:00.238-0500 c20012| 2016-04-06T02:51:59.603-0500 I REPL [initial sync] set minValid={ ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:00.240-0500 c20012| 2016-04-06T02:51:59.603-0500 D QUERY [initial sync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.241-0500 c20012| 2016-04-06T02:51:59.603-0500 D QUERY [initial sync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.243-0500 c20012| 2016-04-06T02:51:59.604-0500 I REPL [initial sync] initial sync done
[js_test:multi_coll_drop] 2016-04-06T02:52:00.243-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 0] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.247-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 1] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.249-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 2] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.251-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 3] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.252-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 4] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.254-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 5] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.257-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 6] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.259-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 7] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.262-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 8] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.264-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 9] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.265-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 10] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.269-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 11] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.270-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 12] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.271-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 13] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.272-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 14] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.275-0500 c20012| 2016-04-06T02:51:59.604-0500 D EXECUTOR [repl prefetch worker 15] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.277-0500 c20013| 2016-04-06T02:51:59.604-0500 D STORAGE [initial sync] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-8-751336887848580549 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:00.279-0500 c20013| 2016-04-06T02:51:59.604-0500 D STORAGE [initial sync] local.replset.minvalid: clearing plan cache - collection info cache reset
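
In the records above, c20012 ends its initial sync by persisting minValid, the oplog optime it must replay to before its data can be considered consistent, and c20013's storage line names the backing collection, local.replset.minvalid. A sketch of how that document could be read from a shell connected to a member (illustrative, not part of the test):

    // Read the minValid document persisted at the end of initial sync.
    // local.replset.minvalid is the collection named in the log above.
    var minValid = db.getSiblingDB("local").replset.minvalid.findOne();
    printjson(minValid); // carries the { ts: ..., t: ... } optime logged as "set minValid"
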
[js_test:multi_coll_drop] 2016-04-06T02:52:00.280-0500 c20013| 2016-04-06T02:51:59.604-0500 D QUERY [initial sync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.281-0500 c20013| 2016-04-06T02:51:59.605-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:00.284-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.286-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.288-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.289-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.291-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.292-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.293-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.295-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.297-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.298-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.299-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.301-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.303-0500 c20011| 2016-04-06T02:51:59.605-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58538 #7 (5 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:00.305-0500 c20011| 2016-04-06T02:51:59.605-0500 D COMMAND [conn7] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" }
[js_test:multi_coll_drop] 2016-04-06T02:52:00.309-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.312-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.312-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.314-0500 c20012| 2016-04-06T02:51:59.605-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.316-0500 c20011| 2016-04-06T02:51:59.605-0500 I COMMAND [conn7] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.318-0500 c20013| 2016-04-06T02:51:59.605-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:multi_coll_drop] 2016-04-06T02:52:00.320-0500 c20013| 2016-04-06T02:51:59.605-0500 D NETWORK [initial sync] connected to server mongovm16:20011 (192.168.100.28)
[js_test:multi_coll_drop] 2016-04-06T02:52:00.322-0500 c20011| 2016-04-06T02:51:59.605-0500 D QUERY [conn7] Running query: query: {} sort: {} projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:00.323-0500 c20011| 2016-04-06T02:51:59.606-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} ntoreturn=1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.327-0500 c20011| 2016-04-06T02:51:59.606-0500 I COMMAND [conn7] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.329-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 0] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.330-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 15] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.331-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 1] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.332-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 2] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.334-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 3] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.335-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 4] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.340-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 5] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.343-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 7] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.345-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 6] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.346-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.348-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.351-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.353-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.357-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.358-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.364-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.367-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.368-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 8] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.371-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 9] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.375-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 10] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.375-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 11] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.379-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 12] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.380-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 13] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.381-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.387-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.390-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.392-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.396-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 14] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.397-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.399-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.403-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.403-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.404-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.405-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.405-0500 c20012| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.406-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.408-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 0] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.411-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 2] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.411-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 3] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.412-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 4] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.413-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 5] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.415-0500 c20013| 2016-04-06T02:51:59.606-0500 D EXECUTOR [repl prefetch worker 1] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.418-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 6] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.420-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.426-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.427-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 9] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.431-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.434-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 8] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.435-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 10] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.436-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.436-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.438-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.440-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.441-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 12] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.443-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 13] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.445-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 11] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.447-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 14] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.448-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 15] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.454-0500 c20011| 2016-04-06T02:51:59.607-0500 D QUERY [conn7] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:00.456-0500 c20011| 2016-04-06T02:51:59.607-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.459-0500 c20011| 2016-04-06T02:51:59.607-0500 I COMMAND [conn7] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.461-0500 c20013| 2016-04-06T02:51:59.607-0500 D QUERY [initial sync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.462-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.463-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.465-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.466-0500 c20012| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.467-0500 c20013| 2016-04-06T02:51:59.607-0500 D EXECUTOR [repl prefetch worker 7] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.469-0500 c20012| 2016-04-06T02:51:59.608-0500 I REPL [initial sync] Initial sync done, starting steady state replication.
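
The conn7 traffic on c20011 above is the syncing node repeatedly reading the newest entry of its sync source's oplog, a reverse-natural scan with ntoreturn:1, to bracket the progress of initial sync. Typed into a shell against mongovm16:20011, the same read would look like this (illustrative):

    // Equivalent of the logged { query: {}, orderby: { $natural: -1 } }
    // ntoreturn:1 request: fetch the most recent oplog entry.
    var last = db.getSiblingDB("local").oplog.rs
        .find()
        .sort({ $natural: -1 }) // reverse natural order: newest document first
        .limit(1)
        .next();
    printjson(last.ts); // the optime that initial sync records as minValid
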
[js_test:multi_coll_drop] 2016-04-06T02:52:00.469-0500 c20012| 2016-04-06T02:51:59.608-0500 I REPL [initial sync] Starting replication applier threads
[js_test:multi_coll_drop] 2016-04-06T02:52:00.472-0500 c20012| 2016-04-06T02:51:59.608-0500 I REPL [initial sync] Starting replication reporter thread
[js_test:multi_coll_drop] 2016-04-06T02:52:00.472-0500 c20012| 2016-04-06T02:51:59.608-0500 I REPL [ReplicationExecutor] transition to RECOVERING
[js_test:multi_coll_drop] 2016-04-06T02:52:00.474-0500 c20012| 2016-04-06T02:51:59.608-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] The NetworkInterfaceASIO worker thread is spinning up
[js_test:multi_coll_drop] 2016-04-06T02:52:00.475-0500 c20011| 2016-04-06T02:51:59.608-0500 D NETWORK [conn6] SocketException: remote: 192.168.100.28:58537 error: 9001 socket exception [CLOSED] server [192.168.100.28:58537]
[js_test:multi_coll_drop] 2016-04-06T02:52:00.476-0500 c20011| 2016-04-06T02:51:59.608-0500 I NETWORK [conn6] end connection 192.168.100.28:58537 (4 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:00.480-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.482-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.484-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.485-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.487-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.488-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.491-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.491-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.496-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.499-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.500-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.501-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.503-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.505-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.507-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.508-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.511-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl prefetch worker 0] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.514-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl prefetch worker 1] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.514-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl prefetch worker 2] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.516-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl prefetch worker 3] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.518-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl prefetch worker 5] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.526-0500 c20012| 2016-04-06T02:51:59.608-0500 D EXECUTOR [repl prefetch worker 6] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.528-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 7] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.529-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 9] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.531-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 8] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.532-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 11] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.533-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 13] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.533-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 14] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.534-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 15] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.536-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 12] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.537-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 10] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.538-0500 c20012| 2016-04-06T02:51:59.609-0500 D EXECUTOR [repl prefetch worker 4] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.538-0500 c20012| 2016-04-06T02:51:59.609-0500 I REPL [ReplicationExecutor] transition to SECONDARY
[js_test:multi_coll_drop] 2016-04-06T02:52:00.539-0500 c20013| 2016-04-06T02:51:59.609-0500 I REPL [initial sync] initial sync drop all databases
[js_test:multi_coll_drop] 2016-04-06T02:52:00.539-0500 c20013| 2016-04-06T02:51:59.609-0500 I STORAGE [initial sync] dropAllDatabasesExceptLocal 1
[js_test:multi_coll_drop] 2016-04-06T02:52:00.541-0500 c20013| 2016-04-06T02:51:59.610-0500 I REPL [initial sync] initial sync clone all databases
[js_test:multi_coll_drop] 2016-04-06T02:52:00.543-0500 c20013| 2016-04-06T02:51:59.610-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.544-0500 c20011| 2016-04-06T02:51:59.610-0500 D COMMAND [conn7] run command admin.$cmd { listDatabases: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:00.546-0500 c20011| 2016-04-06T02:51:59.610-0500 D COMMAND [conn7] command: listDatabases
[js_test:multi_coll_drop] 2016-04-06T02:52:00.550-0500 c20011| 2016-04-06T02:51:59.610-0500 I COMMAND [conn7] command admin.$cmd command: listDatabases { listDatabases: 1 } numYields:0 reslen:169 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.550-0500 c20013| 2016-04-06T02:51:59.610-0500 I REPL [initial sync] initial sync data copy, starting syncup
[js_test:multi_coll_drop] 2016-04-06T02:52:00.551-0500 c20013| 2016-04-06T02:51:59.610-0500 I REPL [initial sync] oplog sync 1 of 3
[js_test:multi_coll_drop] 2016-04-06T02:52:00.554-0500 c20011| 2016-04-06T02:51:59.610-0500 D QUERY [conn7] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:00.556-0500 c20011| 2016-04-06T02:51:59.610-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached.
[js_test:multi_coll_drop] 2016-04-06T02:52:00.575-0500 c20011| 2016-04-06T02:51:59.610-0500 I COMMAND [conn7] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.576-0500 c20013| 2016-04-06T02:51:59.610-0500 I REPL [initial sync] oplog sync 2 of 3
[js_test:multi_coll_drop] 2016-04-06T02:52:00.580-0500 c20011| 2016-04-06T02:51:59.610-0500 D QUERY [conn7] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:00.586-0500 c20011| 2016-04-06T02:51:59.610-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.590-0500 c20011| 2016-04-06T02:51:59.610-0500 I COMMAND [conn7] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.592-0500 c20013| 2016-04-06T02:51:59.610-0500 I REPL [initial sync] initial sync building indexes
[js_test:multi_coll_drop] 2016-04-06T02:52:00.592-0500 c20013| 2016-04-06T02:51:59.610-0500 I REPL [initial sync] oplog sync 3 of 3
[js_test:multi_coll_drop] 2016-04-06T02:52:00.593-0500 c20013| 2016-04-06T02:51:59.610-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.595-0500 c20013| 2016-04-06T02:51:59.610-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.598-0500 c20013| 2016-04-06T02:51:59.610-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.602-0500 c20013| 2016-04-06T02:51:59.610-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.603-0500 c20013| 2016-04-06T02:51:59.610-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.605-0500 c20013| 2016-04-06T02:51:59.610-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.607-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.609-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.610-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.615-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.619-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.621-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.621-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.624-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 0] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.625-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 1] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.632-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 2] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.632-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.633-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 3] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.634-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 4] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.635-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 5] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.636-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 6] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.641-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 7] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.645-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 8] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.645-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.647-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 11] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.648-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 9] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.649-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 10] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.651-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.652-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 12] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.653-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 13] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.657-0500 c20011| 2016-04-06T02:51:59.611-0500 D QUERY [conn7] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:00.660-0500 c20011| 2016-04-06T02:51:59.611-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.663-0500 c20011| 2016-04-06T02:51:59.611-0500 I COMMAND [conn7] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.664-0500 c20013| 2016-04-06T02:51:59.611-0500 D EXECUTOR [repl prefetch worker 14] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.665-0500 c20013| 2016-04-06T02:51:59.612-0500 D EXECUTOR [repl prefetch worker 15] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.665-0500 c20013| 2016-04-06T02:51:59.612-0500 D QUERY [initial sync] Running query: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:00.669-0500 c20013| 2016-04-06T02:51:59.612-0500 D QUERY [initial sync] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:00.673-0500 c20013| 2016-04-06T02:51:59.612-0500 I COMMAND [initial sync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 8, w: 5, W: 1 } }, Database: { acquireCount: { r: 1, W: 5 } }, Collection: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:00.674-0500 c20013| 2016-04-06T02:51:59.612-0500 I REPL [initial sync] initial sync finishing up
[js_test:multi_coll_drop] 2016-04-06T02:52:00.674-0500 c20013| 2016-04-06T02:51:59.612-0500 I REPL [initial sync] set minValid={ ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:00.676-0500 c20013| 2016-04-06T02:51:59.612-0500 D QUERY [initial sync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.682-0500 c20013| 2016-04-06T02:51:59.612-0500 D QUERY [initial sync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:00.693-0500 c20013| 2016-04-06T02:51:59.612-0500 I REPL [initial sync] initial sync done
[js_test:multi_coll_drop] 2016-04-06T02:52:00.699-0500 c20013| 2016-04-06T02:51:59.612-0500 D EXECUTOR [repl prefetch worker 12] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.700-0500 c20013| 2016-04-06T02:51:59.612-0500 D EXECUTOR [repl prefetch worker 13] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.701-0500 c20013| 2016-04-06T02:51:59.612-0500 D EXECUTOR [repl prefetch worker 14] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.701-0500 c20013| 2016-04-06T02:51:59.612-0500 D EXECUTOR [repl prefetch worker 0] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.702-0500 c20013| 2016-04-06T02:51:59.612-0500 D EXECUTOR [repl prefetch worker 1] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.707-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 15] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.708-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 2] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.710-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 3] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.711-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 4] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.713-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 5] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.719-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 6] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.721-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 7] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.724-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 8] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.725-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 11] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.728-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 9] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.730-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl prefetch worker 10] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.733-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.734-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.738-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.739-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.743-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.745-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.753-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.754-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.757-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.758-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.765-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.766-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.770-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.774-0500 c20013| 2016-04-06T02:51:59.613-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.778-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.782-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.784-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 11] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.786-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 14] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.793-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 15] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.798-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 0] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.803-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 7] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.804-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 2] shutting down thread in pool repl prefetch worker Pool
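
c20013's pass above mirrors the sequence c20012 just completed: drop every database except local (dropAllDatabasesExceptLocal), ask the sync source { listDatabases: 1 }, copy the data, build indexes, and checkpoint against the oplog three times (oplog sync 1 of 3 through 3 of 3) before setting minValid. The listDatabases step is an ordinary admin command and is easy to reproduce by hand (illustrative, not part of the test):

    // The cloner's first question to its sync source: which databases exist?
    var res = db.adminCommand({ listDatabases: 1 });
    res.databases.forEach(function(d) {
        print(d.name + " (" + d.sizeOnDisk + " bytes on disk)");
    });
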
[js_test:multi_coll_drop] 2016-04-06T02:52:00.804-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 3] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.807-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 4] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.809-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 5] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.810-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 1] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.814-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 6] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.815-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 9] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.816-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 8] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.819-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 10] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.821-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 12] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.823-0500 c20013| 2016-04-06T02:51:59.614-0500 D EXECUTOR [repl prefetch worker 13] shutting down thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.827-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.831-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.832-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.834-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.834-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.835-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.841-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.843-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.844-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.847-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.848-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.850-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.855-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.860-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.860-0500 c20013| 2016-04-06T02:51:59.615-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.861-0500 c20013| 2016-04-06T02:51:59.616-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.863-0500 c20013| 2016-04-06T02:51:59.617-0500 I REPL [initial sync] Initial sync done, starting steady state replication.
[js_test:multi_coll_drop] 2016-04-06T02:52:00.865-0500 c20013| 2016-04-06T02:51:59.617-0500 I REPL [initial sync] Starting replication applier threads
[js_test:multi_coll_drop] 2016-04-06T02:52:00.866-0500 c20013| 2016-04-06T02:51:59.617-0500 I REPL [initial sync] Starting replication reporter thread
[js_test:multi_coll_drop] 2016-04-06T02:52:00.867-0500 c20013| 2016-04-06T02:51:59.617-0500 I REPL [ReplicationExecutor] transition to RECOVERING
[js_test:multi_coll_drop] 2016-04-06T02:52:00.871-0500 c20011| 2016-04-06T02:51:59.617-0500 D NETWORK [conn7] SocketException: remote: 192.168.100.28:58538 error: 9001 socket exception [CLOSED] server [192.168.100.28:58538]
[js_test:multi_coll_drop] 2016-04-06T02:52:00.873-0500 c20011| 2016-04-06T02:51:59.617-0500 I NETWORK [conn7] end connection 192.168.100.28:58538 (3 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:00.874-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.875-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.878-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.880-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.882-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.884-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.886-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.887-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
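
Both secondaries pass through RECOVERING on their way to SECONDARY once steady-state replication starts, and each drops its temporary clone connection to mongovm16:20011 as it does so. A member's externally visible state can be read at any point with the replSetGetStatus admin command, which the shell helper rs.status() wraps (an illustrative check, not something this test issues):

    // Print each member's replica-set state as this node sees it
    // (PRIMARY = 1, SECONDARY = 2, RECOVERING = 3).
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function(m) {
        print(m.name + " -> " + m.stateStr);
    });
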
[js_test:multi_coll_drop] 2016-04-06T02:52:00.888-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.892-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.895-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.896-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.897-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.899-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl prefetch worker 0] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.902-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl prefetch worker 1] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.906-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl prefetch worker 3] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.911-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl prefetch worker 2] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.913-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 5] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.915-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 4] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.916-0500 c20013| 2016-04-06T02:51:59.617-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.917-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 6] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.918-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 7] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.920-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 8] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.921-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 9] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.923-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 10] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.926-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 11] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.926-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 12] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.927-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 13] starting thread in pool repl prefetch worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:00.927-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 15] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.929-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl prefetch worker 14] starting thread in pool repl prefetch worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.931-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.934-0500 c20013| 2016-04-06T02:51:59.618-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:00.938-0500 c20013| 2016-04-06T02:51:59.618-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:00.942-0500 c20013| 2016-04-06T02:51:59.618-0500 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:52:00.943-0500 c20011| 2016-04-06T02:51:59.638-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.948-0500 c20011| 2016-04-06T02:51:59.638-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.949-0500 c20012| 2016-04-06T02:51:59.638-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.953-0500 c20012| 2016-04-06T02:51:59.639-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.956-0500 c20013| 2016-04-06T02:51:59.639-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.957-0500 c20013| 2016-04-06T02:51:59.639-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.961-0500 c20011| 2016-04-06T02:51:59.840-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.963-0500 c20011| 2016-04-06T02:51:59.840-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.963-0500 c20012| 2016-04-06T02:51:59.841-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.965-0500 c20012| 2016-04-06T02:51:59.841-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.965-0500 c20013| 2016-04-06T02:51:59.841-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.967-0500 c20013| 2016-04-06T02:51:59.841-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.972-0500 c20011| 2016-04-06T02:52:00.042-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.973-0500 c20011| 2016-04-06T02:52:00.042-0500 I COMMAND [conn1] command 
admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.974-0500 c20012| 2016-04-06T02:52:00.042-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.978-0500 c20012| 2016-04-06T02:52:00.043-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.978-0500 c20013| 2016-04-06T02:52:00.044-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.980-0500 c20013| 2016-04-06T02:52:00.044-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:00.982-0500 c20011| 2016-04-06T02:52:00.094-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 11 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:10.094-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.990-0500 c20011| 2016-04-06T02:52:00.094-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 11 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:00.992-0500 c20012| 2016-04-06T02:52:00.095-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:00.993-0500 c20012| 2016-04-06T02:52:00.095-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:00.997-0500 c20011| 2016-04-06T02:52:00.095-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 11 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:01.000-0500 c20012| 2016-04-06T02:52:00.095-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.016-0500 c20011| 2016-04-06T02:52:00.095-0500 I REPL [ReplicationExecutor] Member mongovm16:20012 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:52:01.019-0500 c20011| 2016-04-06T02:52:00.095-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:02.595Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.021-0500 c20011| 2016-04-06T02:52:00.099-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 13 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:10.099-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.022-0500 c20013| 2016-04-06T02:52:00.099-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.024-0500 c20011| 2016-04-06T02:52:00.099-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 13 on 
host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:01.024-0500 c20013| 2016-04-06T02:52:00.099-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:01.036-0500 c20013| 2016-04-06T02:52:00.102-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } numYields:0 reslen:470 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.040-0500 c20011| 2016-04-06T02:52:00.102-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 13 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:01.042-0500 c20011| 2016-04-06T02:52:00.102-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:52:01.044-0500 c20011| 2016-04-06T02:52:00.102-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:02.602Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.056-0500 c20011| 2016-04-06T02:52:00.244-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.062-0500 c20011| 2016-04-06T02:52:00.244-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.066-0500 c20012| 2016-04-06T02:52:00.245-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.081-0500 c20012| 2016-04-06T02:52:00.245-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.083-0500 c20013| 2016-04-06T02:52:00.245-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.088-0500 c20013| 2016-04-06T02:52:00.245-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.088-0500 c20011| 2016-04-06T02:52:00.446-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.090-0500 c20011| 2016-04-06T02:52:00.446-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.091-0500 c20012| 2016-04-06T02:52:00.446-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.093-0500 c20012| 2016-04-06T02:52:00.446-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.095-0500 c20013| 2016-04-06T02:52:00.447-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.096-0500 c20013| 2016-04-06T02:52:00.447-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.097-0500 c20012| 2016-04-06T02:52:00.561-0500 D REPL 
[rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp 1459929117000|1, t: -1 } 1169182228640141205 [js_test:multi_coll_drop] 2016-04-06T02:52:01.098-0500 c20012| 2016-04-06T02:52:00.561-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:52:01.102-0500 c20012| 2016-04-06T02:52:00.561-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:00.561Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.106-0500 c20012| 2016-04-06T02:52:00.561-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:00.561Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.114-0500 c20012| 2016-04-06T02:52:00.562-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 11 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:10.562-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.116-0500 c20012| 2016-04-06T02:52:00.562-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 12 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:10.562-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.118-0500 c20012| 2016-04-06T02:52:00.562-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 11 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:01.119-0500 c20012| 2016-04-06T02:52:00.562-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 12 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:01.121-0500 c20011| 2016-04-06T02:52:00.562-0500 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.122-0500 c20011| 2016-04-06T02:52:00.562-0500 D COMMAND [conn2] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:01.124-0500 c20011| 2016-04-06T02:52:00.562-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.130-0500 c20012| 2016-04-06T02:52:00.562-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 11 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:01.132-0500 c20012| 2016-04-06T02:52:00.562-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:03.062Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.134-0500 c20013| 2016-04-06T02:52:00.564-0500 D REPL [rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp 1459929117000|1, t: -1 } 1169182228640141205 [js_test:multi_coll_drop] 2016-04-06T02:52:01.135-0500 c20013| 2016-04-06T02:52:00.564-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:52:01.136-0500 c20013| 2016-04-06T02:52:00.564-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:00.564Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.140-0500 c20013| 
2016-04-06T02:52:00.564-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:00.564Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.142-0500 c20013| 2016-04-06T02:52:00.564-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 13 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:10.564-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.146-0500 c20013| 2016-04-06T02:52:00.564-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 14 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:10.564-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.149-0500 c20013| 2016-04-06T02:52:00.565-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.150-0500 c20013| 2016-04-06T02:52:00.565-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:01.152-0500 c20013| 2016-04-06T02:52:00.565-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.161-0500 c20012| 2016-04-06T02:52:00.566-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 12 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:01.162-0500 c20012| 2016-04-06T02:52:00.566-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:52:01.167-0500 c20012| 2016-04-06T02:52:00.566-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:03.066Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.175-0500 c20013| 2016-04-06T02:52:00.568-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 13 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:01.177-0500 c20013| 2016-04-06T02:52:00.568-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 14 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:01.183-0500 c20012| 2016-04-06T02:52:00.568-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.183-0500 c20012| 2016-04-06T02:52:00.568-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:01.188-0500 c20011| 2016-04-06T02:52:00.568-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.197-0500 c20011| 2016-04-06T02:52:00.568-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:01.205-0500 c20011| 2016-04-06T02:52:00.569-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: 
"mongovm16:20013", fromId: 2, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.210-0500 c20013| 2016-04-06T02:52:00.569-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 13 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:01.212-0500 c20013| 2016-04-06T02:52:00.569-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:03.069Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.214-0500 c20012| 2016-04-06T02:52:00.570-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.216-0500 c20013| 2016-04-06T02:52:00.570-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 14 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:01.217-0500 c20013| 2016-04-06T02:52:00.570-0500 I REPL [ReplicationExecutor] Member mongovm16:20012 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:52:01.218-0500 c20013| 2016-04-06T02:52:00.570-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:03.070Z [js_test:multi_coll_drop] 2016-04-06T02:52:01.219-0500 c20011| 2016-04-06T02:52:00.647-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.224-0500 c20011| 2016-04-06T02:52:00.647-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.226-0500 c20012| 2016-04-06T02:52:00.648-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.228-0500 c20012| 2016-04-06T02:52:00.648-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.229-0500 c20013| 2016-04-06T02:52:00.648-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.231-0500 c20013| 2016-04-06T02:52:00.648-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.233-0500 c20011| 2016-04-06T02:52:00.849-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.237-0500 c20011| 2016-04-06T02:52:00.849-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.238-0500 c20012| 2016-04-06T02:52:00.849-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.240-0500 c20012| 2016-04-06T02:52:00.849-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
2016-04-06T02:52:01.242-0500 c20013| 2016-04-06T02:52:00.853-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.246-0500 c20013| 2016-04-06T02:52:00.854-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.247-0500 c20011| 2016-04-06T02:52:01.055-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.250-0500 c20011| 2016-04-06T02:52:01.055-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.251-0500 c20012| 2016-04-06T02:52:01.059-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.254-0500 c20012| 2016-04-06T02:52:01.059-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.255-0500 c20013| 2016-04-06T02:52:01.059-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.257-0500 c20013| 2016-04-06T02:52:01.059-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.262-0500 c20011| 2016-04-06T02:52:01.260-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.263-0500 c20011| 2016-04-06T02:52:01.260-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.266-0500 c20012| 2016-04-06T02:52:01.260-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.269-0500 c20012| 2016-04-06T02:52:01.261-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.269-0500 c20013| 2016-04-06T02:52:01.261-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.271-0500 c20013| 2016-04-06T02:52:01.261-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.462-0500 c20011| 2016-04-06T02:52:01.461-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.465-0500 c20011| 2016-04-06T02:52:01.461-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.476-0500 c20012| 2016-04-06T02:52:01.462-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.481-0500 c20012| 2016-04-06T02:52:01.462-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.481-0500 c20013| 2016-04-06T02:52:01.462-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.484-0500 c20013| 
2016-04-06T02:52:01.462-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.664-0500 c20011| 2016-04-06T02:52:01.663-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.665-0500 c20011| 2016-04-06T02:52:01.663-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.667-0500 c20012| 2016-04-06T02:52:01.663-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.671-0500 c20012| 2016-04-06T02:52:01.663-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.676-0500 c20013| 2016-04-06T02:52:01.664-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.680-0500 c20013| 2016-04-06T02:52:01.664-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.865-0500 c20011| 2016-04-06T02:52:01.865-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.867-0500 c20011| 2016-04-06T02:52:01.865-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.868-0500 c20012| 2016-04-06T02:52:01.866-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.871-0500 c20012| 2016-04-06T02:52:01.869-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:01.872-0500 c20013| 2016-04-06T02:52:01.869-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:01.875-0500 c20013| 2016-04-06T02:52:01.870-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.071-0500 c20011| 2016-04-06T02:52:02.070-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.072-0500 c20011| 2016-04-06T02:52:02.070-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.074-0500 c20012| 2016-04-06T02:52:02.071-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.074-0500 c20012| 2016-04-06T02:52:02.071-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.075-0500 c20013| 2016-04-06T02:52:02.071-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.076-0500 c20013| 2016-04-06T02:52:02.071-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
2016-04-06T02:52:02.272-0500 c20011| 2016-04-06T02:52:02.272-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.273-0500 c20011| 2016-04-06T02:52:02.272-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.276-0500 c20012| 2016-04-06T02:52:02.272-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.278-0500 c20012| 2016-04-06T02:52:02.272-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.278-0500 c20013| 2016-04-06T02:52:02.273-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.280-0500 c20013| 2016-04-06T02:52:02.273-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.475-0500 c20011| 2016-04-06T02:52:02.473-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.477-0500 c20011| 2016-04-06T02:52:02.473-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.477-0500 c20012| 2016-04-06T02:52:02.474-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.480-0500 c20012| 2016-04-06T02:52:02.474-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.480-0500 c20013| 2016-04-06T02:52:02.474-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.481-0500 c20013| 2016-04-06T02:52:02.474-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.609-0500 c20011| 2016-04-06T02:52:02.605-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 15 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:12.605-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.614-0500 c20011| 2016-04-06T02:52:02.605-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 16 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:12.605-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.619-0500 c20012| 2016-04-06T02:52:02.606-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.620-0500 c20012| 2016-04-06T02:52:02.606-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:02.625-0500 c20011| 2016-04-06T02:52:02.605-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 15 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:02.626-0500 c20011| 2016-04-06T02:52:02.605-0500 D ASIO 
[NetworkInterfaceASIO-Replication-0] Starting asynchronous command 16 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:02.629-0500 c20013| 2016-04-06T02:52:02.606-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.631-0500 c20013| 2016-04-06T02:52:02.606-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:02.638-0500 c20011| 2016-04-06T02:52:02.606-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 15 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:02.642-0500 c20013| 2016-04-06T02:52:02.606-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.645-0500 c20011| 2016-04-06T02:52:02.606-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 16 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:02.650-0500 c20012| 2016-04-06T02:52:02.606-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.655-0500 c20011| 2016-04-06T02:52:02.607-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:05.107Z [js_test:multi_coll_drop] 2016-04-06T02:52:02.657-0500 c20011| 2016-04-06T02:52:02.607-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:05.107Z [js_test:multi_coll_drop] 2016-04-06T02:52:02.676-0500 c20011| 2016-04-06T02:52:02.675-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.678-0500 c20011| 2016-04-06T02:52:02.675-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.679-0500 c20012| 2016-04-06T02:52:02.676-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.680-0500 c20012| 2016-04-06T02:52:02.676-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.683-0500 c20013| 2016-04-06T02:52:02.676-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.684-0500 c20013| 2016-04-06T02:52:02.677-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.878-0500 c20011| 2016-04-06T02:52:02.877-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.881-0500 c20011| 2016-04-06T02:52:02.877-0500 I 
COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.881-0500 c20012| 2016-04-06T02:52:02.877-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.884-0500 c20012| 2016-04-06T02:52:02.877-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:02.885-0500 c20013| 2016-04-06T02:52:02.878-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:02.888-0500 c20013| 2016-04-06T02:52:02.878-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.072-0500 c20012| 2016-04-06T02:52:03.064-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 15 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:13.064-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.074-0500 c20012| 2016-04-06T02:52:03.064-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 15 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.079-0500 c20011| 2016-04-06T02:52:03.064-0500 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.080-0500 c20011| 2016-04-06T02:52:03.064-0500 D COMMAND [conn2] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:03.083-0500 c20012| 2016-04-06T02:52:03.064-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 15 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.086-0500 c20011| 2016-04-06T02:52:03.064-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.086-0500 c20012| 2016-04-06T02:52:03.064-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:05.564Z [js_test:multi_coll_drop] 2016-04-06T02:52:03.091-0500 c20013| 2016-04-06T02:52:03.070-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 17 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:13.070-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.093-0500 c20013| 2016-04-06T02:52:03.070-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 18 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:13.070-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.095-0500 c20011| 2016-04-06T02:52:03.070-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } 
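
Each heartbeat reply immediately schedules the next probe a fixed interval out (the "Scheduling heartbeat to ... at ..." lines above, here about 2.5 s ahead). The cadence is governed by the replica-set configuration; a hedged way to inspect the relevant knobs from any member:

    // Read the settings that govern heartbeat and election timing.
    var cfg = new Mongo("mongovm16:20011").getDB("admin")
                  .runCommand({replSetGetConfig: 1}).config;
    printjson(cfg.settings);  // heartbeatIntervalMillis, electionTimeoutMillis, etc.
                              // Actual values depend on the harness's overrides.
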
[js_test:multi_coll_drop] 2016-04-06T02:52:03.095-0500 c20011| 2016-04-06T02:52:03.070-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:03.097-0500 c20013| 2016-04-06T02:52:03.070-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 17 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.098-0500 c20013| 2016-04-06T02:52:03.070-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 18 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:03.100-0500 c20012| 2016-04-06T02:52:03.071-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.103-0500 c20012| 2016-04-06T02:52:03.071-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:03.105-0500 c20011| 2016-04-06T02:52:03.071-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.106-0500 c20013| 2016-04-06T02:52:03.071-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 17 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.110-0500 c20013| 2016-04-06T02:52:03.071-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:05.571Z [js_test:multi_coll_drop] 2016-04-06T02:52:03.112-0500 c20013| 2016-04-06T02:52:03.071-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.112-0500 c20013| 2016-04-06T02:52:03.071-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:03.114-0500 c20012| 2016-04-06T02:52:03.071-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 17 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:13.071-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.116-0500 c20012| 2016-04-06T02:52:03.071-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 17 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.123-0500 c20013| 2016-04-06T02:52:03.072-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.127-0500 c20012| 2016-04-06T02:52:03.072-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 17 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.131-0500 c20012| 2016-04-06T02:52:03.072-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:05.572Z [js_test:multi_coll_drop] 
2016-04-06T02:52:03.139-0500 c20012| 2016-04-06T02:52:03.072-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 0 } numYields:0 reslen:439 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.148-0500 c20013| 2016-04-06T02:52:03.075-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 18 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.149-0500 c20013| 2016-04-06T02:52:03.075-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:05.575Z [js_test:multi_coll_drop] 2016-04-06T02:52:03.149-0500 c20011| 2016-04-06T02:52:03.078-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.150-0500 c20011| 2016-04-06T02:52:03.079-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.151-0500 c20012| 2016-04-06T02:52:03.079-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.151-0500 c20012| 2016-04-06T02:52:03.079-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.152-0500 c20013| 2016-04-06T02:52:03.079-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.154-0500 c20013| 2016-04-06T02:52:03.079-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.154-0500 c20011| 2016-04-06T02:52:03.117-0500 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 5000ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.158-0500 c20011| 2016-04-06T02:52:03.117-0500 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected [js_test:multi_coll_drop] 2016-04-06T02:52:03.161-0500 c20011| 2016-04-06T02:52:03.117-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 19 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:08.117-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.164-0500 c20011| 2016-04-06T02:52:03.117-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 20 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:08.117-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.169-0500 c20011| 2016-04-06T02:52:03.117-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 19 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:03.170-0500 c20011| 2016-04-06T02:52:03.117-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 20 on host mongovm16:20013 [js_test:multi_coll_drop] 
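
Here mongovm16:20011's election timeout fires ("seen no PRIMARY in the past 5000ms") and, per the Raft-style protocol-version-1 flow, it first holds a dry-run vote: replSetRequestVotes with dryRun: true at the current term 0, which probes electability without forcing anyone's term up. Only after the peers grant the dry run (the voteGranted: true replies below) does it start the real round at term 1; the ballots are recorded durably in local.replset.election, and the WiredTiger createRecordStore/createSortedDataInterface lines below show all three nodes creating that collection on first use. A hedged reconstruction of the logged vote request (internal command; field values copied from the log):

    // Dry-run vote request as recorded above.
    var vote = new Mongo("mongovm16:20012").getDB("admin").runCommand({
        replSetRequestVotes: 1,
        setName: "multidrop-configRS",
        dryRun: true,
        term: 0,
        candidateIndex: 0,
        configVersion: 1,
        lastCommittedOp: {ts: Timestamp(1459929117, 1), t: -1}
    });
    // Logged reply: { term: 0, voteGranted: true, reason: "", ok: 1 }
    // The persisted ballot can later be inspected with (document shape is an
    // assumption): db.getSiblingDB("local").getCollection("replset.election").findOne()
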
2016-04-06T02:52:03.172-0500 c20012| 2016-04-06T02:52:03.117-0500 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.174-0500 c20012| 2016-04-06T02:52:03.117-0500 D COMMAND [conn3] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:52:03.182-0500 c20013| 2016-04-06T02:52:03.117-0500 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.183-0500 c20013| 2016-04-06T02:52:03.117-0500 D COMMAND [conn3] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:52:03.198-0500 c20012| 2016-04-06T02:52:03.117-0500 D STORAGE [conn3] stored meta data for local.replset.election @ RecordId(6) [js_test:multi_coll_drop] 2016-04-06T02:52:03.200-0500 c20012| 2016-04-06T02:52:03.118-0500 D STORAGE [conn3] WiredTigerKVEngine::createRecordStore uri: table:collection-9-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:03.202-0500 c20013| 2016-04-06T02:52:03.119-0500 D STORAGE [conn3] stored meta data for local.replset.election @ RecordId(6) [js_test:multi_coll_drop] 2016-04-06T02:52:03.204-0500 c20013| 2016-04-06T02:52:03.119-0500 D STORAGE [conn3] WiredTigerKVEngine::createRecordStore uri: table:collection-9-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:03.207-0500 c20012| 2016-04-06T02:52:03.121-0500 D STORAGE [conn3] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-9-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:03.207-0500 c20012| 2016-04-06T02:52:03.121-0500 D STORAGE [conn3] local.replset.election: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:03.212-0500 c20012| 2016-04-06T02:52:03.121-0500 D STORAGE [conn3] WiredTigerKVEngine::createSortedDataInterface ident: index-10-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.election" }), [js_test:multi_coll_drop] 2016-04-06T02:52:03.218-0500 c20012| 2016-04-06T02:52:03.121-0500 D STORAGE [conn3] create uri: table:index-10-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.election" }), [js_test:multi_coll_drop] 2016-04-06T02:52:03.220-0500 c20013| 2016-04-06T02:52:03.124-0500 D STORAGE [conn3] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-9-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:03.221-0500 c20013| 2016-04-06T02:52:03.124-0500 
D STORAGE [conn3] local.replset.election: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:03.225-0500 c20013| 2016-04-06T02:52:03.124-0500 D STORAGE [conn3] WiredTigerKVEngine::createSortedDataInterface ident: index-10-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.election" }), [js_test:multi_coll_drop] 2016-04-06T02:52:03.232-0500 c20013| 2016-04-06T02:52:03.124-0500 D STORAGE [conn3] create uri: table:index-10-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.election" }), [js_test:multi_coll_drop] 2016-04-06T02:52:03.234-0500 c20011| 2016-04-06T02:52:03.125-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 19 finished with response: { term: 0, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.245-0500 c20012| 2016-04-06T02:52:03.124-0500 D STORAGE [conn3] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-10-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:03.251-0500 c20012| 2016-04-06T02:52:03.124-0500 D STORAGE [conn3] local.replset.election: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:03.265-0500 c20012| 2016-04-06T02:52:03.124-0500 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:03.273-0500 c20012| 2016-04-06T02:52:03.124-0500 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } numYields:0 reslen:123 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { W: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.276-0500 c20011| 2016-04-06T02:52:03.125-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 20 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:08.117-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.277-0500 c20011| 2016-04-06T02:52:03.125-0500 I REPL [ReplicationExecutor] dry election run succeeded, running for election [js_test:multi_coll_drop] 2016-04-06T02:52:03.280-0500 c20011| 2016-04-06T02:52:03.125-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 20 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:08.117-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:52:03.282-0500 c20011| 2016-04-06T02:52:03.125-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 20 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:52:03.283-0500 c20011| 2016-04-06T02:52:03.125-0500 D STORAGE [replExecDBWorker-0] stored meta data for local.replset.election @ RecordId(5) [js_test:multi_coll_drop] 2016-04-06T02:52:03.286-0500 c20011| 2016-04-06T02:52:03.125-0500 D STORAGE [replExecDBWorker-0] WiredTigerKVEngine::createRecordStore uri: table:collection-7--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:03.288-0500 c20013| 2016-04-06T02:52:03.129-0500 D STORAGE [conn3] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-10-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:03.289-0500 c20013| 2016-04-06T02:52:03.129-0500 D STORAGE [conn3] local.replset.election: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:03.291-0500 c20013| 2016-04-06T02:52:03.129-0500 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:03.293-0500 c20013| 2016-04-06T02:52:03.129-0500 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } numYields:0 reslen:123 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { W: 2 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.296-0500 c20011| 2016-04-06T02:52:03.129-0500 D STORAGE [replExecDBWorker-0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-7--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:03.296-0500 c20011| 2016-04-06T02:52:03.129-0500 D STORAGE [replExecDBWorker-0] local.replset.election: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:03.298-0500 c20011| 2016-04-06T02:52:03.129-0500 D STORAGE [replExecDBWorker-0] WiredTigerKVEngine::createSortedDataInterface ident: index-8--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.election" }), [js_test:multi_coll_drop] 2016-04-06T02:52:03.299-0500 c20011| 2016-04-06T02:52:03.129-0500 D STORAGE [replExecDBWorker-0] create uri: table:index-8--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.election" }), [js_test:multi_coll_drop] 2016-04-06T02:52:03.302-0500 c20013| 2016-04-06T02:52:03.129-0500 D NETWORK [conn3] SocketException: remote: 192.168.100.28:49337 error: 9001 socket exception [CLOSED] server [192.168.100.28:49337] [js_test:multi_coll_drop] 2016-04-06T02:52:03.305-0500 c20013| 2016-04-06T02:52:03.129-0500 I NETWORK [conn3] end connection 192.168.100.28:49337 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.313-0500 c20011| 2016-04-06T02:52:03.131-0500 D STORAGE [replExecDBWorker-0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-8--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:03.315-0500 c20011| 2016-04-06T02:52:03.131-0500 D STORAGE [replExecDBWorker-0] local.replset.election: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:03.316-0500 c20011| 2016-04-06T02:52:03.131-0500 D QUERY [replExecDBWorker-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:03.320-0500 c20011| 2016-04-06T02:52:03.131-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 23 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:08.131-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 1, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.322-0500 c20011| 2016-04-06T02:52:03.131-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 24 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:08.131-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 1, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.324-0500 c20011| 2016-04-06T02:52:03.131-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 23 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:03.324-0500 c20011| 2016-04-06T02:52:03.131-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.327-0500 c20012| 2016-04-06T02:52:03.131-0500 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 1, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.329-0500 c20012| 2016-04-06T02:52:03.131-0500 D COMMAND [conn3] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:52:03.332-0500 c20013| 2016-04-06T02:52:03.132-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49611 #6 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.347-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 25 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.350-0500 c20013| 2016-04-06T02:52:03.132-0500 D COMMAND [conn6] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:52:03.356-0500 c20013| 2016-04-06T02:52:03.132-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.360-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 23 finished with response: { term: 1, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.365-0500 c20011| 2016-04-06T02:52:03.132-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.370-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 25 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:03.372-0500 c20012| 2016-04-06T02:52:03.132-0500 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:03.375-0500 c20012| 2016-04-06T02:52:03.132-0500 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 1, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } numYields:0 reslen:123 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.377-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 24 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.379-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 24 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:08.131-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 1, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.381-0500 c20011| 2016-04-06T02:52:03.132-0500 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 1 [js_test:multi_coll_drop] 2016-04-06T02:52:03.381-0500 c20011| 2016-04-06T02:52:03.132-0500 I REPL [ReplicationExecutor] transition to PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:52:03.384-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 24 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:08.131-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 1, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:52:03.385-0500 c20011| 2016-04-06T02:52:03.132-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:03.132Z [js_test:multi_coll_drop] 2016-04-06T02:52:03.387-0500 c20011| 2016-04-06T02:52:03.132-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:03.132Z [js_test:multi_coll_drop] 2016-04-06T02:52:03.391-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 24 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:52:03.394-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 28 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:13.132-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.396-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 29 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:13.132-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.398-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 28 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:03.404-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] 
Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.412-0500 c20013| 2016-04-06T02:52:03.132-0500 D COMMAND [conn6] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 1, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.413-0500 c20013| 2016-04-06T02:52:03.132-0500 D COMMAND [conn6] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:52:03.420-0500 c20012| 2016-04-06T02:52:03.132-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.420-0500 c20012| 2016-04-06T02:52:03.132-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:03.421-0500 c20011| 2016-04-06T02:52:03.132-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 30 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.422-0500 c20013| 2016-04-06T02:52:03.132-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49612 #7 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.423-0500 c20013| 2016-04-06T02:52:03.133-0500 D COMMAND [conn7] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:52:03.426-0500 c20012| 2016-04-06T02:52:03.133-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.429-0500 c20011| 2016-04-06T02:52:03.133-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 28 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.431-0500 c20011| 2016-04-06T02:52:03.133-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:05.133Z [js_test:multi_coll_drop] 2016-04-06T02:52:03.432-0500 c20013| 2016-04-06T02:52:03.133-0500 I COMMAND [conn7] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.435-0500 c20011| 2016-04-06T02:52:03.133-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.435-0500 c20011| 2016-04-06T02:52:03.133-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 30 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:03.437-0500 c20011| 2016-04-06T02:52:03.133-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 29 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.439-0500 c20013| 2016-04-06T02:52:03.133-0500 D QUERY [conn6] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:03.444-0500 c20013| 2016-04-06T02:52:03.133-0500 I COMMAND [conn6] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 1, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929117000|1, t: -1 } } numYields:0 reslen:123 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.449-0500 c20013| 2016-04-06T02:52:03.133-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.449-0500 c20013| 2016-04-06T02:52:03.133-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:03.452-0500 c20013| 2016-04-06T02:52:03.133-0500 D NETWORK [conn6] SocketException: remote: 192.168.100.28:49611 error: 9001 socket exception [CLOSED] server [192.168.100.28:49611] [js_test:multi_coll_drop] 2016-04-06T02:52:03.454-0500 c20013| 2016-04-06T02:52:03.133-0500 I NETWORK [conn6] end connection 192.168.100.28:49611 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.458-0500 c20013| 2016-04-06T02:52:03.133-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.468-0500 c20011| 2016-04-06T02:52:03.133-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 29 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.469-0500 c20011| 2016-04-06T02:52:03.134-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:05.134Z [js_test:multi_coll_drop] 2016-04-06T02:52:03.470-0500 c20011| 2016-04-06T02:52:03.280-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.472-0500 c20011| 2016-04-06T02:52:03.280-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.474-0500 c20012| 2016-04-06T02:52:03.280-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.475-0500 c20012| 2016-04-06T02:52:03.280-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.475-0500 c20013| 2016-04-06T02:52:03.280-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.477-0500 c20013| 2016-04-06T02:52:03.281-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.483-0500 c20011| 2016-04-06T02:52:03.482-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.491-0500 c20011| 2016-04-06T02:52:03.482-0500 I COMMAND [conn1] command 
admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.492-0500 c20012| 2016-04-06T02:52:03.482-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.500-0500 c20012| 2016-04-06T02:52:03.483-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.504-0500 c20013| 2016-04-06T02:52:03.483-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.515-0500 c20013| 2016-04-06T02:52:03.483-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.617-0500 c20011| 2016-04-06T02:52:03.610-0500 D REPL [rsSync] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929117000|1, t: -1 }, firstOpTimeOfMyTerm: { ts: Timestamp 2147483647000|0, t: 2147483647 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.619-0500 c20011| 2016-04-06T02:52:03.610-0500 D REPL [rsSync] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929117000|1, t: -1 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929123000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.620-0500 c20011| 2016-04-06T02:52:03.610-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted [js_test:multi_coll_drop] 2016-04-06T02:52:03.635-0500 c20011| 2016-04-06T02:52:03.634-0500 D REPL [WTJournalFlusher] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929117000|1, t: -1 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929123000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.687-0500 c20011| 2016-04-06T02:52:03.684-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.689-0500 c20011| 2016-04-06T02:52:03.684-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.689-0500 c20012| 2016-04-06T02:52:03.684-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.691-0500 c20012| 2016-04-06T02:52:03.684-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.692-0500 c20013| 2016-04-06T02:52:03.684-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.694-0500 c20013| 2016-04-06T02:52:03.685-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.697-0500 c20012| 2016-04-06T02:52:03.685-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.698-0500 c20012| 2016-04-06T02:52:03.685-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.698-0500 c20013| 2016-04-06T02:52:03.685-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 
2016-04-06T02:52:03.699-0500 c20013| 2016-04-06T02:52:03.685-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.700-0500 c20011| 2016-04-06T02:52:03.686-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.701-0500 c20011| 2016-04-06T02:52:03.686-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.702-0500 c20012| 2016-04-06T02:52:03.686-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.703-0500 c20012| 2016-04-06T02:52:03.686-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.706-0500 c20013| 2016-04-06T02:52:03.686-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.708-0500 c20013| 2016-04-06T02:52:03.686-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.709-0500 "config servers: multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013" [js_test:multi_coll_drop] 2016-04-06T02:52:03.710-0500 2016-04-06T02:52:03.686-0500 I NETWORK [thread1] Starting new replica set monitor for multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.710-0500 2016-04-06T02:52:03.687-0500 I NETWORK [ReplicaSetMonitorWatcher] starting [js_test:multi_coll_drop] 2016-04-06T02:52:03.712-0500 c20013| 2016-04-06T02:52:03.687-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49648 #8 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.712-0500 c20013| 2016-04-06T02:52:03.687-0500 D COMMAND [conn8] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.714-0500 c20013| 2016-04-06T02:52:03.688-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.715-0500 c20013| 2016-04-06T02:52:03.688-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.718-0500 c20013| 2016-04-06T02:52:03.688-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.720-0500 c20011| 2016-04-06T02:52:03.695-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58715 #8 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.721-0500 c20011| 2016-04-06T02:52:03.695-0500 D COMMAND [conn8] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.725-0500 c20011| 2016-04-06T02:52:03.695-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.726-0500 c20011| 2016-04-06T02:52:03.695-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.727-0500 c20011| 2016-04-06T02:52:03.695-0500 I COMMAND [conn8] command admin.$cmd 
command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.728-0500 ShardingTest multidrop : [js_test:multi_coll_drop] 2016-04-06T02:52:03.728-0500 { [js_test:multi_coll_drop] 2016-04-06T02:52:03.730-0500 "config" : "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", [js_test:multi_coll_drop] 2016-04-06T02:52:03.730-0500 "shards" : [ [js_test:multi_coll_drop] 2016-04-06T02:52:03.731-0500 connection to mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:03.732-0500 ] [js_test:multi_coll_drop] 2016-04-06T02:52:03.733-0500 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.737-0500 2016-04-06T02:52:03.697-0500 I - [thread1] shell: started program (sh73407): /data/mci/src/mongos --configdb multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 -vv --chunkSize 50 --port 20014 --setParameter enableTestCommands=1 [js_test:multi_coll_drop] 2016-04-06T02:52:03.739-0500 2016-04-06T02:52:03.698-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused [js_test:multi_coll_drop] 2016-04-06T02:52:03.739-0500 s20014| 2016-04-06T02:52:03.714-0500 I CONTROL [main] [js_test:multi_coll_drop] 2016-04-06T02:52:03.742-0500 s20014| 2016-04-06T02:52:03.714-0500 I CONTROL [main] ** NOTE: This is a development version (3.3.4-37-g36f3ff8) of MongoDB. [js_test:multi_coll_drop] 2016-04-06T02:52:03.742-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [main] ** Not recommended for production. [js_test:multi_coll_drop] 2016-04-06T02:52:03.743-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [main] [js_test:multi_coll_drop] 2016-04-06T02:52:03.747-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [main] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified. [js_test:multi_coll_drop] 2016-04-06T02:52:03.749-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [main] ** Read and write access to data and configuration is unrestricted, [js_test:multi_coll_drop] 2016-04-06T02:52:03.750-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [main] ** and the server listens on all available network interfaces. [js_test:multi_coll_drop] 2016-04-06T02:52:03.751-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended. 
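The interesting action up to this point is the config replica set's first election: c20011 wins the dry run, runs for real in term 1, collects votes from c20012 and c20013, and logs "transition to PRIMARY"; the shell's repeated ismaster polling of all three nodes above is it waiting for that transition to become visible before moving on to start the mongos. A quick way to confirm the same outcome by hand from a shell (illustrative only; host names copied from the log):

    // Ask any member for the set's view of who is primary after the election above.
    var status = new Mongo("mongovm16:20011").getDB("admin")
                     .runCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        // Expected here: mongovm16:20011 PRIMARY, 20012/20013 SECONDARY.
        print(m.name + " -> " + m.stateStr);
    });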
[js_test:multi_coll_drop] 2016-04-06T02:52:03.752-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [main] [js_test:multi_coll_drop] 2016-04-06T02:52:03.753-0500 s20014| 2016-04-06T02:52:03.715-0500 I SHARDING [mongosMain] MongoS version 3.3.4-37-g36f3ff8 starting: pid=73407 port=20014 64-bit host=mongovm16 (--help for usage) [js_test:multi_coll_drop] 2016-04-06T02:52:03.753-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] db version v3.3.4-37-g36f3ff8 [js_test:multi_coll_drop] 2016-04-06T02:52:03.754-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] git version: 36f3ff8da1f7ae3710ceacc4e13adfd4abdb99da [js_test:multi_coll_drop] 2016-04-06T02:52:03.755-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.756-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] allocator: tcmalloc [js_test:multi_coll_drop] 2016-04-06T02:52:03.757-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] modules: enterprise [js_test:multi_coll_drop] 2016-04-06T02:52:03.757-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] build environment: [js_test:multi_coll_drop] 2016-04-06T02:52:03.758-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] distmod: rhel71 [js_test:multi_coll_drop] 2016-04-06T02:52:03.762-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] distarch: ppc64le [js_test:multi_coll_drop] 2016-04-06T02:52:03.762-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] target_arch: ppc64le [js_test:multi_coll_drop] 2016-04-06T02:52:03.764-0500 s20014| 2016-04-06T02:52:03.715-0500 I CONTROL [mongosMain] options: { net: { port: 20014 }, setParameter: { enableTestCommands: "1" }, sharding: { chunkSize: 50, configDB: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013" }, systemLog: { verbosity: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:52:03.773-0500 s20014| 2016-04-06T02:52:03.715-0500 I SHARDING [mongosMain] Updating config server connection string to: multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.775-0500 s20014| 2016-04-06T02:52:03.715-0500 I NETWORK [mongosMain] Starting new replica set monitor for multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.778-0500 s20014| 2016-04-06T02:52:03.715-0500 D COMMAND [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher [js_test:multi_coll_drop] 2016-04-06T02:52:03.780-0500 s20014| 2016-04-06T02:52:03.715-0500 I NETWORK [ReplicaSetMonitorWatcher] starting [js_test:multi_coll_drop] 2016-04-06T02:52:03.783-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.784-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.787-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.789-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.790-0500 
s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-1-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.791-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-4-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.792-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-5-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.795-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-6-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.797-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-7-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.798-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-3-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.800-0500 s20014| 2016-04-06T02:52:03.716-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-8-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.802-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-9-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.802-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-10-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.804-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-12-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.805-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-11-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.809-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-13-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.810-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-14-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.813-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-15-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.814-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-16-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.819-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-17-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.819-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-18-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.820-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-19-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 
2016-04-06T02:52:03.823-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-20-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.824-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-21-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.838-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-22-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.840-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-23-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.842-0500 s20014| 2016-04-06T02:52:03.717-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-24-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.843-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-25-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.844-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-26-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.846-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-27-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.847-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-28-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.849-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-29-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.850-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-30-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.851-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-31-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.853-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-32-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.860-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-33-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.861-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-34-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.862-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-35-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.863-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-36-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.866-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-37-0] The NetworkInterfaceASIO worker thread is spinning up 
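The long run of near-identical ASIO lines around this point (it continues below) is the mongos standing up one NetworkInterfaceASIO worker thread per task executor: TaskExecutorPool-0 through TaskExecutorPool-63, plus the ShardRegistry interfaces. To verify the count from a captured log, a throwaway mongo --nodb snippet works (the log path here is hypothetical):

    // Tally "spinning up" announcements per executor pool; the trailing "-<n>-0"
    // thread suffix is stripped so all 64 TaskExecutorPool threads group together.
    var counts = {};
    cat("/tmp/mongos_20014.log").split("\n").forEach(function (line) {
        var m = line.match(/\[(NetworkInterfaceASIO-[^\]]+)\] The NetworkInterfaceASIO worker thread is spinning up/);
        if (m) {
            var pool = m[1].replace(/-\d+-0$/, "");
            counts[pool] = (counts[pool] || 0) + 1;
        }
    });
    printjson(counts);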
[js_test:multi_coll_drop] 2016-04-06T02:52:03.867-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-38-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.869-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-39-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.870-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-40-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.871-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-41-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.872-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-42-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.873-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-43-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.873-0500 s20014| 2016-04-06T02:52:03.718-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-44-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.875-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-45-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.876-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-46-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.878-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-47-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.878-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-48-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.879-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-49-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.881-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-50-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.881-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-51-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.884-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-52-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.885-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-53-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.886-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-55-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.888-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-54-0] The NetworkInterfaceASIO worker thread is 
spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.888-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-56-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.889-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-57-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.890-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-58-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.892-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-59-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.893-0500 s20014| 2016-04-06T02:52:03.719-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-60-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.894-0500 s20014| 2016-04-06T02:52:03.720-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-61-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.899-0500 s20014| 2016-04-06T02:52:03.720-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-62-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.902-0500 s20014| 2016-04-06T02:52:03.720-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-63-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:03.904-0500 s20014| 2016-04-06T02:52:03.720-0500 D NETWORK [mongosMain] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:03.904-0500 s20014| 2016-04-06T02:52:03.720-0500 D NETWORK [mongosMain] creating new connection to:mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:03.909-0500 s20014| 2016-04-06T02:52:03.720-0500 I SHARDING [thread1] creating distributed lock ping thread for process mongovm16:20014:1459929123:-665935931 (sleeping for 30000ms) [js_test:multi_coll_drop] 2016-04-06T02:52:03.909-0500 s20014| 2016-04-06T02:52:03.720-0500 D NETWORK [replSetDistLockPinger] creating new connection to:mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.909-0500 s20014| 2016-04-06T02:52:03.721-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:03.910-0500 s20014| 2016-04-06T02:52:03.721-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:03.913-0500 c20011| 2016-04-06T02:52:03.721-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58719 #9 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.914-0500 s20014| 2016-04-06T02:52:03.721-0500 D NETWORK [mongosMain] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:03.916-0500 c20013| 2016-04-06T02:52:03.721-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49652 #9 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.918-0500 c20013| 2016-04-06T02:52:03.722-0500 D COMMAND [conn9] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:03.922-0500 c20013| 2016-04-06T02:52:03.722-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 
reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.922-0500 s20014| 2016-04-06T02:52:03.722-0500 D NETWORK [mongosMain] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:52:03.922-0500 c20013| 2016-04-06T02:52:03.722-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.925-0500 c20013| 2016-04-06T02:52:03.722-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.926-0500 c20013| 2016-04-06T02:52:03.722-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.928-0500 c20013| 2016-04-06T02:52:03.722-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.929-0500 s20014| 2016-04-06T02:52:03.722-0500 D NETWORK [mongosMain] creating new connection to:mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:03.932-0500 s20014| 2016-04-06T02:52:03.722-0500 D NETWORK [replSetDistLockPinger] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:03.933-0500 c20011| 2016-04-06T02:52:03.722-0500 D COMMAND [conn9] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:03.934-0500 c20011| 2016-04-06T02:52:03.722-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.935-0500 s20014| 2016-04-06T02:52:03.723-0500 D NETWORK [replSetDistLockPinger] connected connection! 
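The "Starting new refresh of replica set multidrop-configRS" sequence above is the mongos ReplicaSetMonitor walking its seed list: open a connection to each config server, send isMaster (the hostInfo field identifies the caller), and record which member claims to be primary. The probe can be approximated by hand in the shell (an illustration only; the real monitor is C++ inside mongos):

    // Probe each config-server seed the way the monitor does and report roles.
    ["mongovm16:20011", "mongovm16:20012", "mongovm16:20013"].forEach(function (h) {
        var r = new Mongo(h).getDB("admin").runCommand({ isMaster: 1 });
        print(h + " -> ismaster: " + r.ismaster + ", setName: " + r.setName);
    });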
[js_test:multi_coll_drop] 2016-04-06T02:52:03.935-0500 c20011| 2016-04-06T02:52:03.723-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.936-0500 c20011| 2016-04-06T02:52:03.723-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.936-0500 c20011| 2016-04-06T02:52:03.723-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.937-0500 c20011| 2016-04-06T02:52:03.723-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.940-0500 s20014| 2016-04-06T02:52:03.723-0500 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 1 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:33.723-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929123720) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.941-0500 s20014| 2016-04-06T02:52:03.724-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.943-0500 s20014| 2016-04-06T02:52:03.724-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:03.945-0500 c20012| 2016-04-06T02:52:03.724-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36389 #6 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.946-0500 s20014| 2016-04-06T02:52:03.724-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 2 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.947-0500 s20014| 2016-04-06T02:52:03.724-0500 D NETWORK [mongosMain] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:03.949-0500 c20011| 2016-04-06T02:52:03.724-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58721 #10 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.951-0500 c20012| 2016-04-06T02:52:03.724-0500 D COMMAND [conn6] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:03.953-0500 c20012| 2016-04-06T02:52:03.724-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.954-0500 s20014| 2016-04-06T02:52:03.724-0500 D NETWORK [mongosMain] connected connection! 
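RemoteCommand 1 above is the distributed-lock pinger's heartbeat: an upserted findAndModify into config.lockpings, keyed by the process string mongovm16:20014:1459929123:-665935931 and written with w: "majority" so a config-server failover cannot lose it. Replaying the same write by hand (illustrative; new Date() stands in for the logged ping literal):

    // Equivalent shell form of the pinger's upsert, aimed at the config primary.
    var confDB = new Mongo("mongovm16:20011").getDB("config");
    printjson(confDB.runCommand({
        findAndModify: "lockpings",
        query: { _id: "mongovm16:20014:1459929123:-665935931" },
        update: { $set: { ping: new Date() } },
        upsert: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    }));

The STORAGE lines just below show this first upsert creating config.lockpings and its _id index on the primary.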
[js_test:multi_coll_drop] 2016-04-06T02:52:03.969-0500 c20012| 2016-04-06T02:52:03.724-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.970-0500 c20012| 2016-04-06T02:52:03.724-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.971-0500 c20011| 2016-04-06T02:52:03.724-0500 D COMMAND [conn10] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:03.971-0500 c20011| 2016-04-06T02:52:03.724-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.972-0500 s20014| 2016-04-06T02:52:03.725-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.974-0500 s20014| 2016-04-06T02:52:03.725-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 2 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:03.976-0500 s20014| 2016-04-06T02:52:03.725-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.977-0500 c20012| 2016-04-06T02:52:03.725-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.978-0500 c20012| 2016-04-06T02:52:03.725-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.979-0500 s20014| 2016-04-06T02:52:03.725-0500 D ASIO [mongosMain] startCommand: RemoteCommand 3 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:33.725-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.980-0500 s20014| 2016-04-06T02:52:03.725-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.981-0500 s20014| 2016-04-06T02:52:03.726-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 4 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.984-0500 c20011| 2016-04-06T02:52:03.725-0500 D COMMAND [conn10] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929123720) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:03.985-0500 c20011| 2016-04-06T02:52:03.725-0500 D STORAGE [conn10] create collection config.lockpings {} [js_test:multi_coll_drop] 2016-04-06T02:52:03.986-0500 c20011| 2016-04-06T02:52:03.725-0500 D STORAGE [conn10] stored meta data for config.lockpings @ RecordId(6) [js_test:multi_coll_drop] 2016-04-06T02:52:03.989-0500 c20011| 2016-04-06T02:52:03.725-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-9--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:03.990-0500 c20011| 2016-04-06T02:52:03.726-0500 I NETWORK 
[initandlisten] connection accepted from 192.168.100.28:58722 #11 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:03.993-0500 c20011| 2016-04-06T02:52:03.726-0500 D COMMAND [conn11] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:03.995-0500 c20011| 2016-04-06T02:52:03.726-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:03.996-0500 s20014| 2016-04-06T02:52:03.726-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:03.997-0500 s20014| 2016-04-06T02:52:03.726-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 4 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:04.017-0500 s20014| 2016-04-06T02:52:03.726-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 3 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:04.020-0500 c20011| 2016-04-06T02:52:03.727-0500 D COMMAND [conn11] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:04.020-0500 c20011| 2016-04-06T02:52:03.727-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:04.022-0500 c20011| 2016-04-06T02:52:03.727-0500 D COMMAND [conn11] Snapshot not available for readConcern: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:04.023-0500 c20011| 2016-04-06T02:52:03.743-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-9--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:04.024-0500 c20011| 2016-04-06T02:52:03.743-0500 D STORAGE [conn10] config.lockpings: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:04.028-0500 c20011| 2016-04-06T02:52:03.743-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-10--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.lockpings" }), [js_test:multi_coll_drop] 2016-04-06T02:52:04.035-0500 c20011| 2016-04-06T02:52:03.743-0500 D STORAGE [conn10] create uri: table:index-10--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.lockpings" }), [js_test:multi_coll_drop] 2016-04-06T02:52:04.040-0500 c20011| 2016-04-06T02:52:03.754-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-10--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:04.041-0500 c20011| 2016-04-06T02:52:03.754-0500 D STORAGE [conn10] config.lockpings: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 
2016-04-06T02:52:04.047-0500 c20011| 2016-04-06T02:52:03.754-0500 D REPL [conn10] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929117000|1, t: -1 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:04.053-0500 c20011| 2016-04-06T02:52:03.754-0500 D QUERY [conn10] Using idhack: { _id: "mongovm16:20014:1459929123:-665935931" }
[js_test:multi_coll_drop] 2016-04-06T02:52:04.059-0500 c20011| 2016-04-06T02:52:03.754-0500 D REPL [conn10] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929117000|1, t: -1 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:04.064-0500 c20011| 2016-04-06T02:52:03.757-0500 D REPL [conn10] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929117000|1, t: -1 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:04.064-0500 2016-04-06T02:52:03.898-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:04.100-0500 2016-04-06T02:52:04.099-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:04.302-0500 2016-04-06T02:52:04.300-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:04.501-0500 2016-04-06T02:52:04.501-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:04.702-0500 2016-04-06T02:52:04.701-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:04.903-0500 2016-04-06T02:52:04.902-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:05.106-0500 2016-04-06T02:52:05.105-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:05.139-0500 c20011| 2016-04-06T02:52:05.133-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 33 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:15.133-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.142-0500 c20011| 2016-04-06T02:52:05.134-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 33 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:52:05.146-0500 c20011| 2016-04-06T02:52:05.135-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 34 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:15.135-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.147-0500 c20013| 2016-04-06T02:52:05.135-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.148-0500 c20011| 2016-04-06T02:52:05.135-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 34 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:05.149-0500 c20013| 2016-04-06T02:52:05.135-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:52:05.151-0500 c20013| 2016-04-06T02:52:05.137-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:52:05.152-0500 c20011| 2016-04-06T02:52:05.137-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 34 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.153-0500 c20011| 2016-04-06T02:52:05.137-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:07.137Z
[js_test:multi_coll_drop] 2016-04-06T02:52:05.155-0500 c20012| 2016-04-06T02:52:05.145-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.156-0500 c20012| 2016-04-06T02:52:05.145-0500 D COMMAND [conn3] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:52:05.159-0500 c20011| 2016-04-06T02:52:05.156-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 33 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.161-0500 c20011| 2016-04-06T02:52:05.156-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:07.156Z
[js_test:multi_coll_drop] 2016-04-06T02:52:05.165-0500 c20012| 2016-04-06T02:52:05.155-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:05.306-0500 2016-04-06T02:52:05.306-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:05.509-0500 2016-04-06T02:52:05.506-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:05.568-0500 c20012| 2016-04-06T02:52:05.564-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 19 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:15.564-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.569-0500 c20012| 2016-04-06T02:52:05.564-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 19 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:05.576-0500 c20011| 2016-04-06T02:52:05.564-0500 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.576-0500 c20011| 2016-04-06T02:52:05.564-0500 D COMMAND [conn2] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:52:05.581-0500 c20012| 2016-04-06T02:52:05.564-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 19 finished with response: { ok: 1.0, electionTime: new Date(6270347837762961409), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, opTime: { ts: Timestamp 1459929123000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.587-0500 c20011| 2016-04-06T02:52:05.564-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:05.588-0500 c20012| 2016-04-06T02:52:05.564-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state PRIMARY
[js_test:multi_coll_drop] 2016-04-06T02:52:05.588-0500 c20012| 2016-04-06T02:52:05.564-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:08.064Z
[js_test:multi_coll_drop] 2016-04-06T02:52:05.593-0500 c20013| 2016-04-06T02:52:05.572-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 21 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:15.572-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.595-0500 c20013| 2016-04-06T02:52:05.572-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 21 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:05.597-0500 c20011| 2016-04-06T02:52:05.572-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.599-0500 c20011| 2016-04-06T02:52:05.572-0500 D COMMAND [conn3] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:52:05.616-0500 c20012| 2016-04-06T02:52:05.572-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 21 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:15.572-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.619-0500 c20011| 2016-04-06T02:52:05.573-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:05.627-0500 c20013| 2016-04-06T02:52:05.573-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 21 finished with response: { ok: 1.0, electionTime: new Date(6270347837762961409), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, opTime: { ts: Timestamp 1459929123000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.628-0500 c20012| 2016-04-06T02:52:05.573-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 21 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:05.630-0500 c20013| 2016-04-06T02:52:05.573-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state PRIMARY
[js_test:multi_coll_drop] 2016-04-06T02:52:05.632-0500 c20013| 2016-04-06T02:52:05.573-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:08.073Z
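
At this point both secondaries have learned from the heartbeat responses (electionTime, state: 1, primaryId: 0) that mongovm16:20011 won the term-1 election: each node runs replSetHeartbeat against its peers every couple of seconds and updates its view of member states from the reply. A test that needs to wait for the same transition could poll replSetGetStatus; this is a hedged sketch, not part of multi_coll_drop.js, and the connection variable is an assumption for illustration:

    // Sketch: wait until mongovm16:20011 reports itself PRIMARY via replSetGetStatus.
    var conn = new Mongo("mongovm16:20011");
    assert.soon(function() {
        var status = conn.adminCommand({replSetGetStatus: 1});
        return status.members.some(function(m) {
            return m.name === "mongovm16:20011" && m.stateStr === "PRIMARY";
        });
    }, "mongovm16:20011 never transitioned to PRIMARY");
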
[js_test:multi_coll_drop] 2016-04-06T02:52:05.634-0500 c20013| 2016-04-06T02:52:05.573-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.636-0500 c20013| 2016-04-06T02:52:05.573-0500 D COMMAND [conn5] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:52:05.640-0500 c20013| 2016-04-06T02:52:05.573-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:05.642-0500 c20012| 2016-04-06T02:52:05.573-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 21 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.644-0500 c20012| 2016-04-06T02:52:05.573-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:08.073Z
[js_test:multi_coll_drop] 2016-04-06T02:52:05.646-0500 c20013| 2016-04-06T02:52:05.575-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 23 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:15.575-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.646-0500 c20013| 2016-04-06T02:52:05.575-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 23 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:52:05.647-0500 c20012| 2016-04-06T02:52:05.575-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.648-0500 c20012| 2016-04-06T02:52:05.575-0500 D COMMAND [conn5] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:52:05.652-0500 c20012| 2016-04-06T02:52:05.575-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:05.656-0500 c20013| 2016-04-06T02:52:05.576-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 23 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, opTime: { ts: Timestamp 1459929117000|1, t: -1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:05.657-0500 c20013| 2016-04-06T02:52:05.576-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:08.076Z
[js_test:multi_coll_drop] 2016-04-06T02:52:05.711-0500 2016-04-06T02:52:05.709-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:05.910-0500 2016-04-06T02:52:05.910-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:06.115-0500 2016-04-06T02:52:06.112-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
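
The recurring "Failed to connect to 127.0.0.1:20014" warnings are not a fault in the replica set: port 20014 is the mongos (s20014 later in the log), which has been spawned but is not yet accepting connections, and the shell simply retries on a roughly 200ms cadence until it is. The polling loop is equivalent to this hedged sketch (the helper usage is assumed, not taken from the test):

    // Sketch of the shell-side retry against the not-yet-listening mongos.
    assert.soon(function() {
        try {
            new Mongo("127.0.0.1:20014");  // throws "Connection refused" until s20014 is up
            return true;
        } catch (e) {
            return false;
        }
    }, "mongos on port 20014 never started listening");
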
[js_test:multi_coll_drop] 2016-04-06T02:52:06.313-0500 2016-04-06T02:52:06.312-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:06.518-0500 2016-04-06T02:52:06.513-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:06.615-0500 c20012| 2016-04-06T02:52:06.563-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.618-0500 c20012| 2016-04-06T02:52:06.563-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 23 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:36.563-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.620-0500 c20011| 2016-04-06T02:52:06.563-0500 D COMMAND [conn2] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.623-0500 c20011| 2016-04-06T02:52:06.564-0500 D QUERY [conn2] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:06.626-0500 c20012| 2016-04-06T02:52:06.563-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 23 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.631-0500 c20012| 2016-04-06T02:52:06.564-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 23 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.633-0500 c20011| 2016-04-06T02:52:06.564-0500 I COMMAND [conn2] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.636-0500 c20012| 2016-04-06T02:52:06.564-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20011 starting at filter: { ts: { $gte: Timestamp 1459929117000|1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.638-0500 c20012| 2016-04-06T02:52:06.564-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 25 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.564-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929117000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.641-0500 c20012| 2016-04-06T02:52:06.564-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20011
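
RemoteCommand 25 is the oplog fetcher's cursor: a tailable, awaitData find on local.oplog.rs starting at the last known timestamp, with oplogReplay to optimize the ts predicate. The same command can be replayed from a shell connected to the primary; the values below are copied from the log entry above, and the replication-internal term field is deliberately omitted:

    // Sketch: the fetcher's initial oplog query, issued by hand against the primary.
    var local = new Mongo("mongovm16:20011").getDB("local");
    var res = local.runCommand({
        find: "oplog.rs",
        filter: {ts: {$gte: Timestamp(1459929117, 1)}},
        tailable: true,
        oplogReplay: true,
        awaitData: true,
        maxTimeMS: 60000
    });
    printjson(res.cursor.firstBatch);  // oldest-first batch of ops the secondary will apply
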
[js_test:multi_coll_drop] 2016-04-06T02:52:06.654-0500 c20012| 2016-04-06T02:52:06.564-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.663-0500 c20012| 2016-04-06T02:52:06.564-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 27 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.663-0500 c20012| 2016-04-06T02:52:06.564-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.665-0500 c20012| 2016-04-06T02:52:06.564-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.665-0500 c20012| 2016-04-06T02:52:06.565-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 28 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.669-0500 c20012| 2016-04-06T02:52:06.565-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 26 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.671-0500 c20011| 2016-04-06T02:52:06.565-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58940 #12 (8 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:06.671-0500 c20011| 2016-04-06T02:52:06.565-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58941 #13 (9 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:06.672-0500 c20011| 2016-04-06T02:52:06.565-0500 D COMMAND [conn13] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.674-0500 c20011| 2016-04-06T02:52:06.565-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.676-0500 c20011| 2016-04-06T02:52:06.565-0500 D COMMAND [conn12] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.679-0500 c20012| 2016-04-06T02:52:06.565-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.679-0500 c20012| 2016-04-06T02:52:06.565-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 26 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:06.680-0500 c20012| 2016-04-06T02:52:06.565-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 25 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.684-0500 c20011| 2016-04-06T02:52:06.565-0500 D COMMAND [conn13] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929117000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.689-0500 c20011| 2016-04-06T02:52:06.566-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.692-0500 c20012| 2016-04-06T02:52:06.566-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.696-0500 c20012| 2016-04-06T02:52:06.566-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 28 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:06.696-0500 c20012| 2016-04-06T02:52:06.566-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 27 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.705-0500 c20011| 2016-04-06T02:52:06.566-0500 I COMMAND [conn13] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929117000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 1 } planSummary: COLLSCAN cursorid:20785203637 keysExamined:0 docsExamined:4 numYields:0 nreturned:4 reslen:819 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.713-0500 c20012| 2016-04-06T02:52:06.566-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 25 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } }, { ts: Timestamp 1459929123000|2, t: 1, h: 6452190736163510723, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1459929123000|3, t: 1, h: -3830276443629906377, v: 2, op: "c", ns: "config.$cmd", o: { create: "lockpings" } }, { ts: Timestamp 1459929123000|4, t: 1, h: 5632815493348496991, v: 2, op: "i", ns: "config.lockpings", o: { _id: "mongovm16:20014:1459929123:-665935931", ping: new Date(1459929123720) } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.716-0500 c20012| 2016-04-06T02:52:06.566-0500 D REPL [rsBackgroundSync-0] fetcher read 4 operations from remote oplog starting at ts: Timestamp 1459929117000|1 and ending at ts: Timestamp 1459929123000|4
[js_test:multi_coll_drop] 2016-04-06T02:52:06.721-0500 c20011| 2016-04-06T02:52:06.566-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.722-0500 c20011| 2016-04-06T02:52:06.566-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:06.725-0500 c20011| 2016-04-06T02:52:06.566-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.727-0500 c20011| 2016-04-06T02:52:06.566-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
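
Each replSetUpdatePosition sent upstream carries, per member, the appliedOpTime and durableOpTime the reporter knows about; the primary's "received notification" lines show it folding those values into its commit-point calculation. The same progress is visible externally through replSetGetStatus; a hedged sketch, with the connection an assumption:

    // Sketch: print each member's replication progress as the primary currently sees it.
    var status = new Mongo("mongovm16:20011").adminCommand({replSetGetStatus: 1});
    status.members.forEach(function(m) {
        print(m.name + " " + m.stateStr + " optime: " + tojson(m.optime));
    });
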
[js_test:multi_coll_drop] 2016-04-06T02:52:06.733-0500 c20011| 2016-04-06T02:52:06.566-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.740-0500 c20012| 2016-04-06T02:52:06.566-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 27 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.742-0500 c20012| 2016-04-06T02:52:06.566-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:06.745-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.747-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.749-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.750-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.754-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.756-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.762-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.763-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.764-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.765-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.767-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.767-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.769-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.770-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.771-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.772-0500 c20012| 2016-04-06T02:52:06.567-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:06.774-0500 c20012| 2016-04-06T02:52:06.567-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.775-0500 c20012| 2016-04-06T02:52:06.568-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.778-0500 c20012| 2016-04-06T02:52:06.568-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 31 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.568-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 0|0, t: -1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.779-0500 c20012| 2016-04-06T02:52:06.568-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 31 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.782-0500 c20012| 2016-04-06T02:52:06.568-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.784-0500 c20011| 2016-04-06T02:52:06.568-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 0|0, t: -1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.785-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.791-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.791-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.792-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.795-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.796-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.797-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.800-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.801-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.802-0500 c20013| 2016-04-06T02:52:06.569-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.802-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
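
The bursts of "starting/shutting down thread in pool repl writer worker Pool" around each "replication batch size is 1" line are the secondary's batch-apply path: rsSync hands every fetched batch to a pool of 16 writer threads (worker 0 through 15), which exit once the batch is applied, so the start/shutdown pairs repeat per batch. The pool width is the startup-only mongod parameter replWriterThreadCount; a hedged sketch of shrinking it in a MongoRunner-style harness (usage assumed, not taken from this test):

    // Sketch: start a mongod with a smaller repl writer pool.
    var node = MongoRunner.runMongod({
        replSet: "multidrop-configRS",
        setParameter: "replWriterThreadCount=4"
    });
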
[js_test:multi_coll_drop] 2016-04-06T02:52:06.805-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.808-0500 c20013| 2016-04-06T02:52:06.569-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 25 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:36.569-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.810-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.810-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.813-0500 c20011| 2016-04-06T02:52:06.569-0500 D COMMAND [conn3] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.819-0500 c20011| 2016-04-06T02:52:06.569-0500 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:06.823-0500 c20012| 2016-04-06T02:52:06.569-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.826-0500 c20011| 2016-04-06T02:52:06.569-0500 I COMMAND [conn3] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.827-0500 c20013| 2016-04-06T02:52:06.569-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 25 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.831-0500 c20013| 2016-04-06T02:52:06.570-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 25 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.833-0500 c20013| 2016-04-06T02:52:06.570-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20011 starting at filter: { ts: { $gte: Timestamp 1459929117000|1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.839-0500 c20013| 2016-04-06T02:52:06.570-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 27 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.570-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929117000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.840-0500 c20013| 2016-04-06T02:52:06.570-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.844-0500 c20013| 2016-04-06T02:52:06.570-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.845-0500 c20013| 2016-04-06T02:52:06.570-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.850-0500 c20013| 2016-04-06T02:52:06.570-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 29 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.852-0500 c20013| 2016-04-06T02:52:06.570-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.854-0500 c20013| 2016-04-06T02:52:06.570-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 28 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.856-0500 c20011| 2016-04-06T02:52:06.570-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58942 #14 (10 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:06.858-0500 c20013| 2016-04-06T02:52:06.570-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 30 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.858-0500 c20011| 2016-04-06T02:52:06.571-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58943 #15 (11 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:06.860-0500 c20011| 2016-04-06T02:52:06.571-0500 D COMMAND [conn15] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.865-0500 c20011| 2016-04-06T02:52:06.571-0500 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.866-0500 c20011| 2016-04-06T02:52:06.571-0500 D COMMAND [conn14] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.870-0500 c20011| 2016-04-06T02:52:06.571-0500 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.872-0500 c20012| 2016-04-06T02:52:06.571-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:06.882-0500 c20012| 2016-04-06T02:52:06.571-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:06.885-0500 c20013| 2016-04-06T02:52:06.571-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.891-0500 c20011| 2016-04-06T02:52:06.571-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.896-0500 c20013| 2016-04-06T02:52:06.571-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 30 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:06.901-0500 c20013| 2016-04-06T02:52:06.571-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 29 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.903-0500 c20013| 2016-04-06T02:52:06.571-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.906-0500 c20013| 2016-04-06T02:52:06.571-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 28 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:06.909-0500 c20013| 2016-04-06T02:52:06.571-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 27 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.911-0500 c20011| 2016-04-06T02:52:06.571-0500 D COMMAND [conn14] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929117000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.918-0500 c20012| 2016-04-06T02:52:06.571-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.929-0500 c20012| 2016-04-06T02:52:06.571-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 32 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.930-0500 c20012| 2016-04-06T02:52:06.571-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 32 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:06.932-0500 c20012| 2016-04-06T02:52:06.571-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.934-0500 c20012| 2016-04-06T02:52:06.571-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.935-0500 c20012| 2016-04-06T02:52:06.571-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.937-0500 c20012| 2016-04-06T02:52:06.571-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.940-0500 c20012| 2016-04-06T02:52:06.571-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.941-0500 c20012| 2016-04-06T02:52:06.571-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.942-0500 c20012| 2016-04-06T02:52:06.571-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.943-0500 c20011| 2016-04-06T02:52:06.571-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:06.944-0500 c20011| 2016-04-06T02:52:06.571-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.948-0500 c20011| 2016-04-06T02:52:06.571-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.951-0500 c20011| 2016-04-06T02:52:06.571-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.954-0500 c20011| 2016-04-06T02:52:06.571-0500 I COMMAND [conn14] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929117000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 1 } planSummary: COLLSCAN cursorid:17466612721 keysExamined:0 docsExamined:4 numYields:0 nreturned:4 reslen:819 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.960-0500 c20013| 2016-04-06T02:52:06.571-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 27 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } }, { ts: Timestamp 1459929123000|2, t: 1, h: 6452190736163510723, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1459929123000|3, t: 1, h: -3830276443629906377, v: 2, op: "c", ns: "config.$cmd", o: { create: "lockpings" } }, { ts: Timestamp 1459929123000|4, t: 1, h: 5632815493348496991, v: 2, op: "i", ns: "config.lockpings", o: { _id: "mongovm16:20014:1459929123:-665935931", ping: new Date(1459929123720) } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.966-0500 c20011| 2016-04-06T02:52:06.571-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.966-0500 c20011| 2016-04-06T02:52:06.571-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:06.969-0500 c20011| 2016-04-06T02:52:06.571-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|2, t: 1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.971-0500 c20011| 2016-04-06T02:52:06.571-0500 D REPL [conn12] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929117000|1, t: -1 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.972-0500 c20013| 2016-04-06T02:52:06.571-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 29 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.978-0500 c20011| 2016-04-06T02:52:06.571-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.989-0500 c20011| 2016-04-06T02:52:06.571-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:06.990-0500 c20012| 2016-04-06T02:52:06.572-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 32 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:06.994-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.995-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.998-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:06.998-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:07.010-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:07.011-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:07.016-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:07.017-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:07.019-0500 c20013| 2016-04-06T02:52:06.572-0500 D REPL [rsBackgroundSync-0] fetcher read 4 operations from remote oplog starting at ts: Timestamp 1459929117000|1 and ending at ts: Timestamp 1459929123000|4
[js_test:multi_coll_drop] 2016-04-06T02:52:07.020-0500 c20012| 2016-04-06T02:52:06.572-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:07.025-0500 c20012| 2016-04-06T02:52:06.572-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:07.026-0500 c20012| 2016-04-06T02:52:06.572-0500 D STORAGE [repl writer worker 2] create collection config.lockpings {}
[js_test:multi_coll_drop] 2016-04-06T02:52:07.030-0500 c20012| 2016-04-06T02:52:06.572-0500 D STORAGE [repl writer worker 2] stored meta data for config.lockpings @ RecordId(7)
[js_test:multi_coll_drop] 2016-04-06T02:52:07.036-0500 c20012| 2016-04-06T02:52:06.572-0500 D STORAGE [repl writer worker 2] WiredTigerKVEngine::createRecordStore uri: table:collection-11-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:52:07.039-0500 c20012| 2016-04-06T02:52:06.574-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.053-0500 c20012| 2016-04-06T02:52:06.574-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 34 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.053-0500 c20012| 2016-04-06T02:52:06.574-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 34 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:07.064-0500 c20011| 2016-04-06T02:52:06.574-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.070-0500 c20011| 2016-04-06T02:52:06.574-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:07.073-0500 c20011| 2016-04-06T02:52:06.574-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|2, t: 1 } and is durable through: { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.074-0500 c20011| 2016-04-06T02:52:06.574-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.076-0500 c20011| 2016-04-06T02:52:06.574-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929123000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|2, t: 1 }, name-id: "13" }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.081-0500 c20011| 2016-04-06T02:52:06.574-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929123000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|2, t: 1 }, name-id: "13" }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.083-0500 c20011| 2016-04-06T02:52:06.574-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.085-0500 c20011| 2016-04-06T02:52:06.574-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:07.086-0500 c20011| 2016-04-06T02:52:06.574-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.086-0500 c20011| 2016-04-06T02:52:06.574-0500 D QUERY [conn11] Collection config.shards does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:07.089-0500 c20012| 2016-04-06T02:52:06.574-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 34 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.092-0500 c20011| 2016-04-06T02:52:06.574-0500 I COMMAND [conn11] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:370 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2846ms
[js_test:multi_coll_drop] 2016-04-06T02:52:07.096-0500 c20011| 2016-04-06T02:52:06.574-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 0|0, t: -1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms
[js_test:multi_coll_drop] 2016-04-06T02:52:07.100-0500 c20012| 2016-04-06T02:52:06.574-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 31 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.104-0500 c20012| 2016-04-06T02:52:06.574-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.106-0500 c20012| 2016-04-06T02:52:06.574-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:07.107-0500 s20014| 2016-04-06T02:52:06.574-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 3 finished with response: { waitedMS: 0, cursor: { id: 0, ns: "config.shards", firstBatch: [] }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.112-0500 c20012| 2016-04-06T02:52:06.574-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 37 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.574-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.115-0500 s20014| 2016-04-06T02:52:06.574-0500 D SHARDING [mongosMain] found 0 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.123-0500 s20014| 2016-04-06T02:52:06.574-0500 D ASIO [mongosMain] startCommand: RemoteCommand 6 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:36.574-0500 cmd:{ find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|2, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.125-0500 c20012| 2016-04-06T02:52:06.575-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 37 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:07.126-0500 c20011| 2016-04-06T02:52:06.575-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|2, t: 1 } }
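
Note that RemoteCommand 37 differs from the earlier getMore 31 only in lastKnownCommittedOpTime, which has advanced from { ts: Timestamp 0|0, t: -1 } to { ts: Timestamp 1459929123000|2, t: 1 }: the commit point is piggybacked on the oplog tailing cursor rather than sent in separate messages. Replayed by hand against the primary (a hedged sketch; the cursor id is copied from the log and would differ on any real run):

    // Sketch: the tailing getMore that also propagates the commit point downstream.
    var local = new Mongo("mongovm16:20011").getDB("local");
    var res = local.runCommand({
        getMore: NumberLong("20785203637"),
        collection: "oplog.rs",
        maxTimeMS: 2500,
        term: NumberLong(1),
        lastKnownCommittedOpTime: {ts: Timestamp(1459929123, 2), t: NumberLong(1)}
    });
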
[js_test:multi_coll_drop] 2016-04-06T02:52:07.126-0500 s20014| 2016-04-06T02:52:06.575-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:07.129-0500 c20013| 2016-04-06T02:52:06.575-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:49878 #10 (6 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:07.130-0500 s20014| 2016-04-06T02:52:06.575-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 7 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:07.133-0500 c20013| 2016-04-06T02:52:06.575-0500 D COMMAND [conn10] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.140-0500 c20013| 2016-04-06T02:52:06.575-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 33 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.575-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 0|0, t: -1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.142-0500 c20013| 2016-04-06T02:52:06.575-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 33 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:07.144-0500 c20011| 2016-04-06T02:52:06.575-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 0|0, t: -1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.150-0500 c20013| 2016-04-06T02:52:06.575-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:07.158-0500 c20011| 2016-04-06T02:52:06.575-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 0|0, t: -1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:07.161-0500 c20013| 2016-04-06T02:52:06.575-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 33 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.162-0500 s20014| 2016-04-06T02:52:06.575-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:07.164-0500 s20014| 2016-04-06T02:52:06.575-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 7 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:07.165-0500 s20014| 2016-04-06T02:52:06.576-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 6 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:07.166-0500 c20013| 2016-04-06T02:52:06.576-0500 D COMMAND [conn10] run command config.$cmd { find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|2, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.169-0500 c20013| 2016-04-06T02:52:06.576-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929123000|2, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 0|0, t: -1 }
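
conn10's find on config.version carries readConcern { level: "majority", afterOpTime: ... }, so c20013 parks the operation in waitUntilOpTime until its own committed snapshot catches up to the requested optime; the microsecond countdown just below is the 30-second maxTimeMS budget draining while it waits. The command as the mongos sent it, reproduced as a shell sketch:

    // Sketch: the majority read issued by s20014; values are copied from the log above.
    var res = new Mongo("mongovm16:20013").getDB("config").runCommand({
        find: "version",
        readConcern: {
            level: "majority",
            afterOpTime: {ts: Timestamp(1459929123, 2), t: NumberLong(1)}
        },
        maxTimeMS: 30000
    });
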
[js_test:multi_coll_drop] 2016-04-06T02:52:07.170-0500 c20013| 2016-04-06T02:52:06.576-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999985μs
[js_test:multi_coll_drop] 2016-04-06T02:52:07.175-0500 c20013| 2016-04-06T02:52:06.576-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929123000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.177-0500 c20013| 2016-04-06T02:52:06.576-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999922μs
[js_test:multi_coll_drop] 2016-04-06T02:52:07.179-0500 c20013| 2016-04-06T02:52:06.576-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:07.184-0500 c20013| 2016-04-06T02:52:06.576-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 35 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.576-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.185-0500 c20013| 2016-04-06T02:52:06.576-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 35 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:07.187-0500 c20011| 2016-04-06T02:52:06.576-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:07.190-0500 c20012| 2016-04-06T02:52:06.578-0500 D STORAGE [repl writer worker 2] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-11-6577373056560964212 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:52:07.194-0500 c20012| 2016-04-06T02:52:06.578-0500 D STORAGE [repl writer worker 2] config.lockpings: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:07.201-0500 c20012| 2016-04-06T02:52:06.578-0500 D STORAGE [repl writer worker 2] WiredTigerKVEngine::createSortedDataInterface ident: index-12-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.lockpings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:07.205-0500 c20012| 2016-04-06T02:52:06.578-0500 D STORAGE [repl writer worker 2] create uri: table:index-12-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.lockpings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:07.208-0500 c20012| 2016-04-06T02:52:06.581-0500 D STORAGE [repl writer worker 2] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-12-6577373056560964212 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:07.209-0500 c20012| 2016-04-06T02:52:06.582-0500 D STORAGE [repl writer worker 2] config.lockpings: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:07.211-0500 c20012| 2016-04-06T02:52:06.582-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
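
Applying the { create: "lockpings" } oplog entry on c20012 reaches all the way into WiredTiger: one table for the collection's record store (collection-11-...) and one for the implicit _id index (index-12-..., key pattern { _id: 1 }), each followed by a plan-cache reset. On the primary this write originated from the mongos distributed-lock pinger; the equivalent user-level operation, shown as a hedged sketch, is simply:

    // Sketch: creating the collection explicitly has the same storage-level effect.
    new Mongo("mongovm16:20011").getDB("config").createCollection("lockpings");
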
c20012| 2016-04-06T02:52:06.582-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.213-0500 c20012| 2016-04-06T02:52:06.582-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.214-0500 c20012| 2016-04-06T02:52:06.582-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.216-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.218-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.220-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.220-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.222-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.222-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.223-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.224-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.227-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.228-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.229-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.230-0500 c20012| 2016-04-06T02:52:06.583-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.231-0500 c20012| 2016-04-06T02:52:06.583-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.239-0500 c20012| 2016-04-06T02:52:06.583-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.241-0500 c20012| 2016-04-06T02:52:06.583-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.245-0500 c20012| 2016-04-06T02:52:06.583-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 38 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.246-0500 c20012| 2016-04-06T02:52:06.583-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 38 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.247-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.250-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.251-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.251-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.253-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.254-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.255-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.257-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.258-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.259-0500 c20012| 2016-04-06T02:52:06.584-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Request 38 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.262-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.266-0500 c20011| 2016-04-06T02:52:06.584-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.268-0500 c20011| 2016-04-06T02:52:06.584-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.271-0500 c20011| 2016-04-06T02:52:06.584-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|3, t: 1 } and is durable through: { ts: Timestamp 1459929123000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.274-0500 c20011| 2016-04-06T02:52:06.584-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929123000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|2, t: 1 }, name-id: "13" } [js_test:multi_coll_drop] 2016-04-06T02:52:07.277-0500 c20011| 2016-04-06T02:52:06.584-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.284-0500 c20011| 2016-04-06T02:52:06.584-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.285-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.286-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.287-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.290-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.292-0500 c20012| 2016-04-06T02:52:06.584-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:07.292-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 
15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.294-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.294-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.295-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.298-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.299-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.300-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.302-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.303-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.308-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.309-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.310-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.311-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.312-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.314-0500 c20012| 2016-04-06T02:52:06.584-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.316-0500 c20012| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.322-0500 c20013| 2016-04-06T02:52:06.585-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.322-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.323-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.324-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.327-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.327-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.328-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.331-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.332-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.333-0500 c20013| 2016-04-06T02:52:06.585-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.336-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.337-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.338-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.340-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.342-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.342-0500 c20013| 2016-04-06T02:52:06.586-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:07.345-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.347-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.349-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.350-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.357-0500 c20012| 2016-04-06T02:52:06.586-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater 
mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.364-0500 c20012| 2016-04-06T02:52:06.586-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 40 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.365-0500 c20012| 2016-04-06T02:52:06.586-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 40 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.367-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.368-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.370-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.375-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.375-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.377-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.377-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.378-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.379-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.380-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.383-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.385-0500 c20013| 2016-04-06T02:52:06.586-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 35 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: 
"local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.386-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.387-0500 c20012| 2016-04-06T02:52:06.586-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 40 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.390-0500 c20012| 2016-04-06T02:52:06.586-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 37 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.397-0500 c20011| 2016-04-06T02:52:06.586-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.399-0500 c20011| 2016-04-06T02:52:06.586-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.401-0500 c20012| 2016-04-06T02:52:06.586-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929123000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.403-0500 c20011| 2016-04-06T02:52:06.586-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|3, t: 1 } and is durable through: { ts: Timestamp 1459929123000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.404-0500 c20011| 2016-04-06T02:52:06.586-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929123000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.405-0500 c20011| 2016-04-06T02:52:06.586-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929123000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|3, t: 1 }, name-id: "16" } [js_test:multi_coll_drop] 2016-04-06T02:52:07.409-0500 c20011| 2016-04-06T02:52:06.586-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929123000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|3, t: 1 }, name-id: "16" } [js_test:multi_coll_drop] 2016-04-06T02:52:07.411-0500 c20011| 2016-04-06T02:52:06.586-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.414-0500 c20011| 2016-04-06T02:52:06.586-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 
}, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.418-0500 c20011| 2016-04-06T02:52:06.586-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.427-0500 c20011| 2016-04-06T02:52:06.586-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.427-0500 c20012| 2016-04-06T02:52:06.586-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:07.427-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.427-0500 c20013| 2016-04-06T02:52:06.586-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.433-0500 c20013| 2016-04-06T02:52:06.587-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.434-0500 c20012| 2016-04-06T02:52:06.587-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 43 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.586-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.436-0500 c20012| 2016-04-06T02:52:06.587-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 43 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.438-0500 c20013| 2016-04-06T02:52:06.587-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.444-0500 c20013| 2016-04-06T02:52:06.587-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.452-0500 c20013| 2016-04-06T02:52:06.587-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 37 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.453-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.455-0500 c20013| 2016-04-06T02:52:06.587-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 37 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.460-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.462-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.463-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.464-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.468-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.468-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.469-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.470-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.472-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.473-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool 
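(The entries above show the two config-server secondaries, c20012 and c20013, tailing the primary's oplog with getMore and acknowledging progress via replSetUpdatePosition, while a client command on conn10 blocks in waitUntilOpTime until the majority-committed snapshot reaches its readConcern afterOpTime. A minimal shell sketch of the kind of read that produces those waits follows; the direct connection is illustrative only, with the host/port and optime values taken from the log itself:

// Connect straight to config server c20013 and issue the same kind of
// majority read the log shows on conn10. Timestamp(1459929123, 2) is the
// afterOpTime from the log; the server parks the command in waitUntilOpTime
// until that optime is part of its 'committed' snapshot, then answers from
// that snapshot.
var conn = new Mongo("mongovm16:20013");
var res = conn.getDB("config").runCommand({
    find: "version",
    readConcern: { level: "majority",
                   afterOpTime: { ts: Timestamp(1459929123, 2), t: NumberLong(1) } },
    maxTimeMS: 30000
});
printjson(res);

The commit point only advances once the secondaries report durable optimes back to the primary, which is why the waits above resolve as soon as the replSetUpdatePosition round-trips complete.)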
[js_test:multi_coll_drop] 2016-04-06T02:52:07.474-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.475-0500 c20013| 2016-04-06T02:52:06.587-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:07.479-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.481-0500 c20013| 2016-04-06T02:52:06.587-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|2, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.484-0500 c20013| 2016-04-06T02:52:06.587-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.485-0500 c20013| 2016-04-06T02:52:06.587-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 37 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.486-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.488-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.489-0500 c20013| 2016-04-06T02:52:06.587-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.489-0500 c20013| 2016-04-06T02:52:06.587-0500 D STORAGE [repl writer worker 7] create collection config.lockpings {} [js_test:multi_coll_drop] 2016-04-06T02:52:07.491-0500 c20013| 2016-04-06T02:52:06.587-0500 D STORAGE [repl writer worker 7] stored meta data for config.lockpings @ RecordId(7) [js_test:multi_coll_drop] 2016-04-06T02:52:07.493-0500 c20013| 2016-04-06T02:52:06.587-0500 D STORAGE [repl writer worker 7] WiredTigerKVEngine::createRecordStore uri: table:collection-11-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:07.496-0500 c20013| 2016-04-06T02:52:06.587-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929123000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.498-0500 c20013| 2016-04-06T02:52:06.587-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:07.502-0500 c20013| 2016-04-06T02:52:06.588-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 39 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.588-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.503-0500 c20013| 2016-04-06T02:52:06.588-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 39 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.506-0500 c20011| 2016-04-06T02:52:06.587-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 
2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.509-0500 c20011| 2016-04-06T02:52:06.587-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.510-0500 c20011| 2016-04-06T02:52:06.587-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.512-0500 c20011| 2016-04-06T02:52:06.587-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.514-0500 c20011| 2016-04-06T02:52:06.587-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|2, t: 1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.516-0500 c20011| 2016-04-06T02:52:06.587-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929123000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|3, t: 1 }, name-id: "16" } [js_test:multi_coll_drop] 2016-04-06T02:52:07.519-0500 c20011| 2016-04-06T02:52:06.587-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.521-0500 c20011| 2016-04-06T02:52:06.588-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.527-0500 c20013| 2016-04-06T02:52:06.589-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.535-0500 c20013| 2016-04-06T02:52:06.589-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 40 -- 
target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.539-0500 c20013| 2016-04-06T02:52:06.589-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 40 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.545-0500 c20011| 2016-04-06T02:52:06.589-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.553-0500 c20011| 2016-04-06T02:52:06.589-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.557-0500 c20011| 2016-04-06T02:52:06.589-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.559-0500 c20011| 2016-04-06T02:52:06.589-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|2, t: 1 } and is durable through: { ts: Timestamp 1459929123000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.561-0500 c20011| 2016-04-06T02:52:06.589-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929123000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|3, t: 1 }, name-id: "16" } [js_test:multi_coll_drop] 2016-04-06T02:52:07.567-0500 c20011| 2016-04-06T02:52:06.589-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.568-0500 c20013| 2016-04-06T02:52:06.589-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 40 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.570-0500 c20012| 2016-04-06T02:52:06.589-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.574-0500 c20012| 2016-04-06T02:52:06.589-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool 
repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.576-0500 c20012| 2016-04-06T02:52:06.590-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.583-0500 c20012| 2016-04-06T02:52:06.590-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.587-0500 c20012| 2016-04-06T02:52:06.590-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 44 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.590-0500 c20012| 2016-04-06T02:52:06.590-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 44 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.596-0500 c20011| 2016-04-06T02:52:06.590-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.596-0500 c20011| 2016-04-06T02:52:06.590-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.597-0500 c20011| 2016-04-06T02:52:06.590-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|4, t: 1 } and is durable through: { ts: Timestamp 1459929123000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.600-0500 c20011| 2016-04-06T02:52:06.590-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929123000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|3, t: 1 }, name-id: "16" } [js_test:multi_coll_drop] 2016-04-06T02:52:07.605-0500 c20011| 2016-04-06T02:52:06.590-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.618-0500 c20011| 
2016-04-06T02:52:06.590-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.620-0500 c20012| 2016-04-06T02:52:06.590-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 44 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.628-0500 c20012| 2016-04-06T02:52:06.592-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.650-0500 c20012| 2016-04-06T02:52:06.592-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 46 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.652-0500 c20012| 2016-04-06T02:52:06.592-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 46 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.654-0500 c20011| 2016-04-06T02:52:06.592-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.656-0500 c20011| 2016-04-06T02:52:06.592-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.657-0500 c20011| 2016-04-06T02:52:06.592-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|4, t: 1 } and is durable through: { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.659-0500 c20011| 2016-04-06T02:52:06.592-0500 D REPL [conn12] Updating 
_lastCommittedOpTime to { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.661-0500 c20011| 2016-04-06T02:52:06.592-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.663-0500 c20011| 2016-04-06T02:52:06.592-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.664-0500 c20011| 2016-04-06T02:52:06.592-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|3, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.666-0500 c20013| 2016-04-06T02:52:06.592-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 39 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.671-0500 c20011| 2016-04-06T02:52:06.592-0500 I COMMAND [conn10] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929123720) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1459929123720) } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 reslen:414 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 3, W: 2 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 2867ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.674-0500 c20011| 2016-04-06T02:52:06.592-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|3, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.675-0500 s20014| 2016-04-06T02:52:06.593-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1 finished with response: { lastErrorObject: { updatedExisting: false, n: 1, upserted: "mongovm16:20014:1459929123:-665935931" }, value: null, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.676-0500 c20012| 2016-04-06T02:52:06.592-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 46 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 
2016-04-06T02:52:07.678-0500 c20012| 2016-04-06T02:52:06.592-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 43 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.682-0500 c20012| 2016-04-06T02:52:06.593-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.683-0500 c20012| 2016-04-06T02:52:06.593-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:07.687-0500 s20014| 2016-04-06T02:52:06.593-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document [js_test:multi_coll_drop] 2016-04-06T02:52:07.691-0500 c20013| 2016-04-06T02:52:06.593-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.691-0500 c20013| 2016-04-06T02:52:06.593-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:07.693-0500 c20013| 2016-04-06T02:52:06.593-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 43 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.593-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.695-0500 c20013| 2016-04-06T02:52:06.593-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 43 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.698-0500 c20011| 2016-04-06T02:52:06.593-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.699-0500 c20012| 2016-04-06T02:52:06.593-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 49 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.593-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.701-0500 c20012| 2016-04-06T02:52:06.593-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 49 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.702-0500 c20011| 2016-04-06T02:52:06.593-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.703-0500 c20013| 2016-04-06T02:52:06.594-0500 D STORAGE [repl writer worker 7] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-11-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:07.704-0500 c20013| 2016-04-06T02:52:06.594-0500 D STORAGE [repl writer worker 7] config.lockpings: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:07.709-0500 c20013| 2016-04-06T02:52:06.594-0500 D STORAGE [repl writer worker 7] WiredTigerKVEngine::createSortedDataInterface ident: index-12-751336887848580549 config: 
type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.lockpings" }), [js_test:multi_coll_drop] 2016-04-06T02:52:07.710-0500 c20013| 2016-04-06T02:52:06.594-0500 D STORAGE [repl writer worker 7] create uri: table:index-12-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.lockpings" }), [js_test:multi_coll_drop] 2016-04-06T02:52:07.711-0500 c20013| 2016-04-06T02:52:06.601-0500 D STORAGE [repl writer worker 7] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-12-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:07.713-0500 c20013| 2016-04-06T02:52:06.601-0500 D STORAGE [repl writer worker 7] config.lockpings: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:07.714-0500 c20013| 2016-04-06T02:52:06.602-0500 D QUERY [conn10] Collection config.version does not exist. Using EOF plan: query: {} sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:52:07.716-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.716-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.717-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.718-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.719-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.721-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.723-0500 s20014| 2016-04-06T02:52:06.602-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 6 finished with response: { waitedMS: 11, cursor: { id: 0, ns: "config.version", firstBatch: [] }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.724-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.726-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.728-0500 c20013| 2016-04-06T02:52:06.602-0500 I COMMAND [conn10] command config.version command: find { find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|2, t: 1 } }, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:371 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 14435 } }, Database: { acquireCount: { r: 1 } }, Collection: { 
acquireCount: { r: 1 } } } protocol:op_command 26ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.728-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.729-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.732-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.734-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.738-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.741-0500 c20013| 2016-04-06T02:52:06.602-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.746-0500 s20014| 2016-04-06T02:52:06.602-0500 D ASIO [mongosMain] startCommand: RemoteCommand 10 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:36.602-0500 cmd:{ count: "shards", query: {}, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|4, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.747-0500 s20014| 2016-04-06T02:52:06.602-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 10 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:07.751-0500 c20013| 2016-04-06T02:52:06.602-0500 D COMMAND [conn10] run command config.$cmd { count: "shards", query: {}, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|4, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.754-0500 c20013| 2016-04-06T02:52:06.602-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929123000|4, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929123000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.757-0500 c20013| 2016-04-06T02:52:06.602-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999982μs [js_test:multi_coll_drop] 2016-04-06T02:52:07.759-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.760-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.761-0500 c20013| 2016-04-06T02:52:06.603-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.763-0500 c20013| 2016-04-06T02:52:06.603-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.765-0500 c20013| 2016-04-06T02:52:06.603-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999475μs [js_test:multi_coll_drop] 2016-04-06T02:52:07.767-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.767-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.770-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.771-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.773-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.779-0500 c20013| 2016-04-06T02:52:06.603-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.782-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.783-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.786-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.787-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.788-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.789-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.795-0500 c20013| 2016-04-06T02:52:06.603-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 44 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
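
[editor's note] The config.version find and the config.shards count above are issued with readConcern { level: "majority", afterOpTime: ... }, which is why conn10 parks in waitUntilOpTime until the node's 'committed' snapshot reaches { ts: Timestamp 1459929123000|4, t: 1 }. A minimal shell sketch of the same read follows; the host, namespace and optime are taken from the log, while the Mongo() connection is an assumption about how one would reproduce it by hand (Timestamp(sec, inc) is the shell spelling of the logged "Timestamp 1459929123000|4"):

    // Majority read that blocks until the given optime is majority-committed.
    // Command shape copied from the log above.
    var conn = new Mongo("mongovm16:20013");
    var res = conn.getDB("config").runCommand({
        count: "shards",
        query: {},
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929123, 4), t: NumberLong(1) }
        },
        maxTimeMS: 30000
    });
    printjson(res); // e.g. { waitedMS: 2, n: 0, ok: 1 } once the snapshot advances
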
2016-04-06T02:52:07.796-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.798-0500 c20013| 2016-04-06T02:52:06.603-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 44 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.798-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.802-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.802-0500 c20013| 2016-04-06T02:52:06.603-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:07.804-0500 c20013| 2016-04-06T02:52:06.603-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.808-0500 c20011| 2016-04-06T02:52:06.604-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.808-0500 c20011| 2016-04-06T02:52:06.604-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.810-0500 c20011| 2016-04-06T02:52:06.604-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.814-0500 c20011| 2016-04-06T02:52:06.604-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|3, t: 1 } and is durable through: { ts: Timestamp 1459929123000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.818-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.823-0500 c20011| 2016-04-06T02:52:06.604-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.825-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.828-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 14] shutting down thread 
in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.830-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.832-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.834-0500 c20013| 2016-04-06T02:52:06.604-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 44 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.834-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.836-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.837-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.838-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.840-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.841-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.843-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.844-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.848-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.849-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.850-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.851-0500 c20013| 2016-04-06T02:52:06.604-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:07.853-0500 c20013| 2016-04-06T02:52:06.604-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:07.863-0500 c20013| 2016-04-06T02:52:06.604-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.867-0500 c20013| 2016-04-06T02:52:06.604-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 46 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.868-0500 c20013| 2016-04-06T02:52:06.604-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 46 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.874-0500 c20011| 2016-04-06T02:52:06.605-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.876-0500 c20011| 2016-04-06T02:52:06.605-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.880-0500 c20011| 2016-04-06T02:52:06.605-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.883-0500 c20011| 2016-04-06T02:52:06.605-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|4, t: 1 } and is durable through: { ts: Timestamp 1459929123000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.888-0500 c20011| 2016-04-06T02:52:06.605-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|2, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.893-0500 c20013| 2016-04-06T02:52:06.605-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 46 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.895-0500 c20013| 2016-04-06T02:52:06.605-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:07.900-0500 c20013| 2016-04-06T02:52:06.605-0500 D COMMAND [conn10] Using 'committed' snapshot. { count: "shards", query: {}, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|4, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.900-0500 s20014| 2016-04-06T02:52:06.605-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 10 finished with response: { waitedMS: 2, n: 0, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.910-0500 s20014| 2016-04-06T02:52:06.605-0500 D ASIO [mongosMain] startCommand: RemoteCommand 12 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.605-0500 cmd:{ update: "version", updates: [ { q: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') }, u: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.913-0500 s20014| 2016-04-06T02:52:06.605-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 12 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.921-0500 c20011| 2016-04-06T02:52:06.605-0500 D COMMAND [conn10] run command config.$cmd { update: "version", updates: [ { q: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') }, u: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.922-0500 c20011| 2016-04-06T02:52:06.605-0500 D STORAGE [conn10] create collection config.version {} [js_test:multi_coll_drop] 2016-04-06T02:52:07.923-0500 c20011| 2016-04-06T02:52:06.606-0500 D STORAGE [conn10] stored meta data for config.version @ RecordId(7) [js_test:multi_coll_drop] 2016-04-06T02:52:07.926-0500 c20011| 2016-04-06T02:52:06.606-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-11--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:07.927-0500 c20013| 2016-04-06T02:52:06.605-0500 I COMMAND [conn10] command config.shards command: count { count: "shards", query: {}, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929123000|4, t: 1 } }, maxTimeMS: 30000 } planSummary: EOF numYields:0 reslen:313 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:07.932-0500 c20013| 2016-04-06T02:52:06.609-0500 D REPL 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.953-0500 c20013| 2016-04-06T02:52:06.609-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 48 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.955-0500 c20013| 2016-04-06T02:52:06.609-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 48 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.955-0500 c20013| 2016-04-06T02:52:06.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 48 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.961-0500 c20013| 2016-04-06T02:52:06.610-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.976-0500 c20013| 2016-04-06T02:52:06.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 49 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.976-0500 c20013| 2016-04-06T02:52:06.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.977-0500 c20013| 2016-04-06T02:52:06.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 49 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.979-0500 c20013| 2016-04-06T02:52:06.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous 
command 50 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.981-0500 c20013| 2016-04-06T02:52:06.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 49 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.984-0500 c20013| 2016-04-06T02:52:06.610-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:07.985-0500 c20013| 2016-04-06T02:52:06.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 50 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:07.991-0500 c20011| 2016-04-06T02:52:06.609-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:07.992-0500 c20011| 2016-04-06T02:52:06.609-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:07.995-0500 c20011| 2016-04-06T02:52:06.609-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:07.998-0500 c20011| 2016-04-06T02:52:06.609-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|4, t: 1 } and is durable through: { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.000-0500 c20011| 2016-04-06T02:52:06.610-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.000-0500 c20011| 2016-04-06T02:52:06.610-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58945 #16 (12 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:08.002-0500 c20011| 2016-04-06T02:52:06.610-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.003-0500 c20011| 2016-04-06T02:52:06.610-0500 D COMMAND [conn15] command: 
replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:08.045-0500 c20011| 2016-04-06T02:52:06.610-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.050-0500 c20011| 2016-04-06T02:52:06.610-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929123000|4, t: 1 } and is durable through: { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.054-0500 c20011| 2016-04-06T02:52:06.610-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.055-0500 c20011| 2016-04-06T02:52:06.610-0500 D COMMAND [conn16] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:52:08.061-0500 c20011| 2016-04-06T02:52:06.610-0500 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.068-0500 c20011| 2016-04-06T02:52:06.616-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-11--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.069-0500 c20011| 2016-04-06T02:52:06.616-0500 D STORAGE [conn10] config.version: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:08.073-0500 c20011| 2016-04-06T02:52:06.616-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-12--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.version" }), [js_test:multi_coll_drop] 2016-04-06T02:52:08.077-0500 c20011| 2016-04-06T02:52:06.616-0500 D STORAGE [conn10] create uri: table:index-12--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.version" }), [js_test:multi_coll_drop] 2016-04-06T02:52:08.079-0500 c20011| 2016-04-06T02:52:06.640-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-12--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:08.080-0500 c20011| 2016-04-06T02:52:06.640-0500 D STORAGE [conn10] config.version: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:08.087-0500 c20013| 2016-04-06T02:52:06.640-0500 D ASIO 
[NetworkInterfaceASIO-BGSync-0] Request 43 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|1, t: 1, h: -1407863455254960846, v: 2, op: "c", ns: "config.$cmd", o: { create: "version" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.088-0500 c20013| 2016-04-06T02:52:06.640-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|1 and ending at ts: Timestamp 1459929126000|1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.093-0500 c20012| 2016-04-06T02:52:06.641-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 49 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|1, t: 1, h: -1407863455254960846, v: 2, op: "c", ns: "config.$cmd", o: { create: "version" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.098-0500 c20011| 2016-04-06T02:52:06.640-0500 D QUERY [conn10] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.version" } [js_test:multi_coll_drop] 2016-04-06T02:52:08.105-0500 c20011| 2016-04-06T02:52:06.640-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:459 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 47ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.109-0500 c20011| 2016-04-06T02:52:06.640-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. query: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.115-0500 c20011| 2016-04-06T02:52:06.640-0500 I WRITE [conn10] update config.version query: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') } update: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 fastmodinsert:1 upsert:1 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 34ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.118-0500 c20011| 2016-04-06T02:52:06.640-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:459 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 47ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.123-0500 c20013| 2016-04-06T02:52:06.641-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:08.128-0500 c20012| 2016-04-06T02:52:06.641-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|1 and ending at ts: Timestamp 1459929126000|1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.128-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.129-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.132-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.133-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.136-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.138-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.139-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.140-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.142-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.145-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.157-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.159-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.162-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.162-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.164-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.164-0500 c20013| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.165-0500 c20013| 2016-04-06T02:52:06.641-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.165-0500 c20012| 2016-04-06T02:52:06.641-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:08.167-0500 c20012| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.172-0500 c20012| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.184-0500 c20012| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.188-0500 c20012| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.190-0500 c20012| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.192-0500 c20012| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.193-0500 c20012| 2016-04-06T02:52:06.641-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.200-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.201-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.201-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.203-0500 c20012| 2016-04-06T02:52:06.642-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.206-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.207-0500 c20012| 2016-04-06T02:52:06.642-0500 D STORAGE [repl writer worker 14] create collection config.version {} [js_test:multi_coll_drop] 2016-04-06T02:52:08.207-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.212-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.214-0500 c20012| 2016-04-06T02:52:06.642-0500 D STORAGE [repl writer worker 14] stored meta data for config.version @ RecordId(8) [js_test:multi_coll_drop] 2016-04-06T02:52:08.214-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.218-0500 c20012| 2016-04-06T02:52:06.642-0500 D STORAGE [repl writer worker 14] WiredTigerKVEngine::createRecordStore uri: table:collection-13-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:08.220-0500 c20013| 2016-04-06T02:52:06.641-0500 D STORAGE [repl writer worker 1] create collection config.version {} [js_test:multi_coll_drop] 
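
[editor's note] The { op: "c", o: { create: "version" } } and { op: "i", ns: "config.version", ... } oplog entries the secondaries fetch and apply here are the downstream effect of the config.version upsert that mongos started as Request 12 above. A sketch of that same update command, with the shape, clusterId and write concern copied from the log (run against the config server primary; first run upserts, matching the fastmodinsert:1 upsert:1 WRITE line on c20011):

    // First-run initialization of config.version: an upsert with majority
    // write concern, so the command returns only once a majority of the
    // config replica set has the write. Values copied from the log above.
    var versionDoc = {
        _id: 1,
        minCompatibleVersion: 5,
        currentVersion: 6,
        clusterId: ObjectId("5704c02606c33406d4d9c0b9")
    };
    var res = db.getSiblingDB("config").runCommand({
        update: "version",
        updates: [ { q: versionDoc, u: versionDoc, multi: false, upsert: true } ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    printjson(res); // expect nMatched: 0 plus an upserted _id on the first run
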
2016-04-06T02:52:08.223-0500 c20013| 2016-04-06T02:52:06.641-0500 D STORAGE [repl writer worker 1] stored meta data for config.version @ RecordId(8) [js_test:multi_coll_drop] 2016-04-06T02:52:08.227-0500 c20013| 2016-04-06T02:52:06.641-0500 D STORAGE [repl writer worker 1] WiredTigerKVEngine::createRecordStore uri: table:collection-13-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:08.229-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.234-0500 c20012| 2016-04-06T02:52:06.642-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.241-0500 c20013| 2016-04-06T02:52:06.643-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 54 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.643-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.242-0500 c20013| 2016-04-06T02:52:06.643-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 54 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.248-0500 c20011| 2016-04-06T02:52:06.643-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.253-0500 c20011| 2016-04-06T02:52:06.643-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:520 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.257-0500 c20013| 2016-04-06T02:52:06.643-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 54 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|2, t: 1, h: -7403030732037573668, v: 2, op: "i", ns: "config.version", o: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.259-0500 c20013| 2016-04-06T02:52:06.643-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|2 and ending at ts: Timestamp 1459929126000|2 [js_test:multi_coll_drop] 2016-04-06T02:52:08.262-0500 c20012| 2016-04-06T02:52:06.643-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 51 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.643-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.264-0500 c20012| 2016-04-06T02:52:06.643-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 51 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.266-0500 c20011| 2016-04-06T02:52:06.643-0500 D 
COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.269-0500 c20012| 2016-04-06T02:52:06.644-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 51 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|2, t: 1, h: -7403030732037573668, v: 2, op: "i", ns: "config.version", o: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.271-0500 c20012| 2016-04-06T02:52:06.644-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|2 and ending at ts: Timestamp 1459929126000|2 [js_test:multi_coll_drop] 2016-04-06T02:52:08.278-0500 c20011| 2016-04-06T02:52:06.643-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:520 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.280-0500 c20011| 2016-04-06T02:52:06.645-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.287-0500 c20012| 2016-04-06T02:52:06.646-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 53 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.646-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.288-0500 c20012| 2016-04-06T02:52:06.646-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 53 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.291-0500 c20011| 2016-04-06T02:52:06.646-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.293-0500 c20013| 2016-04-06T02:52:06.645-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 56 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.645-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.295-0500 c20013| 2016-04-06T02:52:06.645-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 56 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.297-0500 c20011| 2016-04-06T02:52:06.652-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|4, t: 1 }, name-id: "17" } [js_test:multi_coll_drop] 2016-04-06T02:52:08.299-0500 c20012| 2016-04-06T02:52:06.655-0500 D STORAGE [repl writer worker 14] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: 
table:collection-13-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.300-0500 c20012| 2016-04-06T02:52:06.655-0500 D STORAGE [repl writer worker 14] config.version: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:08.302-0500 c20012| 2016-04-06T02:52:06.655-0500 D STORAGE [repl writer worker 14] WiredTigerKVEngine::createSortedDataInterface ident: index-14-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.version" }), [js_test:multi_coll_drop] 2016-04-06T02:52:08.309-0500 c20012| 2016-04-06T02:52:06.655-0500 D STORAGE [repl writer worker 14] create uri: table:index-14-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.version" }), [js_test:multi_coll_drop] 2016-04-06T02:52:08.313-0500 c20013| 2016-04-06T02:52:06.655-0500 D STORAGE [repl writer worker 1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-13-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.316-0500 c20013| 2016-04-06T02:52:06.655-0500 D STORAGE [repl writer worker 1] config.version: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:08.321-0500 c20013| 2016-04-06T02:52:06.655-0500 D STORAGE [repl writer worker 1] WiredTigerKVEngine::createSortedDataInterface ident: index-14-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.version" }), [js_test:multi_coll_drop] 2016-04-06T02:52:08.324-0500 c20013| 2016-04-06T02:52:06.655-0500 D STORAGE [repl writer worker 1] create uri: table:index-14-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.version" }), [js_test:multi_coll_drop] 2016-04-06T02:52:08.326-0500 c20012| 2016-04-06T02:52:06.662-0500 D STORAGE [repl writer worker 14] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-14-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:08.328-0500 c20012| 2016-04-06T02:52:06.662-0500 D STORAGE [repl writer worker 14] config.version: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:08.329-0500 c20012| 2016-04-06T02:52:06.662-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.331-0500 c20012| 2016-04-06T02:52:06.662-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.336-0500 c20013| 2016-04-06T02:52:06.662-0500 D STORAGE [repl writer worker 1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-14-751336887848580549 ok range 6 -> 6 current: 6 
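
[editor's note] The app_metadata infoObj in these createSortedDataInterface lines is just the spec of the implicit _id index being built on each secondary. Once the batch is applied, the same spec and the WiredTiger table config can be read back from the shell; a small sketch, assuming a connection to one of these config servers (indexDetails is the stock collStats option):

    // Read back the index spec and the WiredTiger creation string that the
    // STORAGE lines above show being written as app_metadata.
    var cfg = db.getSiblingDB("config");
    printjson(cfg.version.getIndexes());
    // -> [ { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.version" } ]
    var stats = cfg.version.stats({ indexDetails: true });
    // creationString echoes type=file,internal_page_max=16k,leaf_page_max=16k,...
    print(stats.indexDetails["_id_"].creationString);
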
[js_test:multi_coll_drop] 2016-04-06T02:52:08.337-0500 c20012| 2016-04-06T02:52:06.662-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.340-0500 c20012| 2016-04-06T02:52:06.662-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.342-0500 c20012| 2016-04-06T02:52:06.662-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.344-0500 c20012| 2016-04-06T02:52:06.662-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.347-0500 c20013| 2016-04-06T02:52:06.662-0500 D STORAGE [repl writer worker 1] config.version: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:08.347-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.348-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.352-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.355-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.357-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.357-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.359-0500 c20013| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.361-0500 c20013| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.363-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.364-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.369-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.369-0500 c20012| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.374-0500 c20013| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.376-0500 c20013| 2016-04-06T02:52:06.664-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.379-0500 c20012| 2016-04-06T02:52:06.664-0500 D QUERY [rsSync] Only one plan is 
available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:08.381-0500 c20012| 2016-04-06T02:52:06.664-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:08.382-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.387-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.389-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.395-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.396-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.397-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.399-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.403-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.403-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.404-0500 c20012| 2016-04-06T02:52:06.665-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.405-0500 c20012| 2016-04-06T02:52:06.666-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.408-0500 c20012| 2016-04-06T02:52:06.666-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.408-0500 c20012| 2016-04-06T02:52:06.666-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.411-0500 c20012| 2016-04-06T02:52:06.666-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.419-0500 c20012| 2016-04-06T02:52:06.666-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.421-0500 c20012| 2016-04-06T02:52:06.666-0500 D EXECUTOR [repl writer worker 14] 
starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.426-0500 c20012| 2016-04-06T02:52:06.666-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 54 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.428-0500 c20012| 2016-04-06T02:52:06.666-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 54 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.430-0500 c20012| 2016-04-06T02:52:06.667-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.432-0500 c20012| 2016-04-06T02:52:06.667-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.438-0500 c20012| 2016-04-06T02:52:06.667-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 54 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.446-0500 c20011| 2016-04-06T02:52:06.667-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.446-0500 c20011| 2016-04-06T02:52:06.667-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:08.454-0500 c20011| 2016-04-06T02:52:06.667-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|1, t: 1 } and is durable through: { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.456-0500 c20011| 2016-04-06T02:52:06.667-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|4, t: 1 }, name-id: "17" } [js_test:multi_coll_drop] 2016-04-06T02:52:08.460-0500 c20011| 2016-04-06T02:52:06.667-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.462-0500 c20011| 2016-04-06T02:52:06.667-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929126000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.464-0500 c20012| 2016-04-06T02:52:06.667-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.466-0500 c20012| 2016-04-06T02:52:06.667-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.467-0500 c20012| 2016-04-06T02:52:06.667-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.470-0500 c20012| 2016-04-06T02:52:06.667-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.474-0500 c20012| 2016-04-06T02:52:06.667-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.474-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.475-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.477-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.478-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.479-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.481-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.482-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.483-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.484-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.485-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.489-0500 c20012| 2016-04-06T02:52:06.667-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.492-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.493-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.496-0500 c20012| 
2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.501-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.508-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.510-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.515-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.518-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.521-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.531-0500 c20012| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.538-0500 c20012| 2016-04-06T02:52:06.668-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:08.543-0500 c20012| 2016-04-06T02:52:06.669-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.549-0500 c20012| 2016-04-06T02:52:06.669-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 56 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.561-0500 c20012| 2016-04-06T02:52:06.669-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 56 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.563-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.563-0500 c20013| 2016-04-06T02:52:06.668-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:08.567-0500 c20011| 2016-04-06T02:52:06.669-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.567-0500 c20011| 2016-04-06T02:52:06.669-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:08.569-0500 c20011| 2016-04-06T02:52:06.669-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|2, t: 1 } and is durable through: { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.573-0500 c20011| 2016-04-06T02:52:06.669-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|4, t: 1 }, name-id: "17" } [js_test:multi_coll_drop] 2016-04-06T02:52:08.578-0500 c20011| 2016-04-06T02:52:06.669-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.595-0500 c20011| 2016-04-06T02:52:06.669-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.602-0500 c20012| 2016-04-06T02:52:06.669-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 56 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.608-0500 c20013| 2016-04-06T02:52:06.670-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:08.614-0500 c20013| 2016-04-06T02:52:06.671-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:08.615-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.616-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.617-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.618-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.619-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.622-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.623-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.624-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.624-0500 c20013| 2016-04-06T02:52:06.671-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.625-0500 c20013| 2016-04-06T02:52:06.672-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.625-0500 c20013| 2016-04-06T02:52:06.672-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.626-0500 c20013| 2016-04-06T02:52:06.672-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.626-0500 c20013| 2016-04-06T02:52:06.672-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.628-0500 c20013| 2016-04-06T02:52:06.672-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.629-0500 c20013| 2016-04-06T02:52:06.672-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.630-0500 c20013| 2016-04-06T02:52:06.672-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.635-0500 c20011| 2016-04-06T02:52:06.673-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.637-0500 c20011| 2016-04-06T02:52:06.673-0500 D COMMAND [conn16] command: replSetUpdatePosition 
[js_test:multi_coll_drop] 2016-04-06T02:52:08.647-0500 c20011| 2016-04-06T02:52:06.673-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.653-0500 c20011| 2016-04-06T02:52:06.673-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|1, t: 1 } and is durable through: { ts: Timestamp 1459929123000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.664-0500 c20011| 2016-04-06T02:52:06.673-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929123000|4, t: 1 }, name-id: "17" } [js_test:multi_coll_drop] 2016-04-06T02:52:08.670-0500 c20011| 2016-04-06T02:52:06.673-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.673-0500 c20013| 2016-04-06T02:52:06.673-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.677-0500 c20013| 2016-04-06T02:52:06.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 57 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929123000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.679-0500 c20013| 2016-04-06T02:52:06.673-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.680-0500 c20013| 2016-04-06T02:52:06.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 57 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.682-0500 c20013| 2016-04-06T02:52:06.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 57 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 
2016-04-06T02:52:08.687-0500 c20012| 2016-04-06T02:52:06.673-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.687-0500 c20013| 2016-04-06T02:52:06.673-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.692-0500 c20012| 2016-04-06T02:52:06.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 58 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.696-0500 c20012| 2016-04-06T02:52:06.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 58 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.697-0500 c20013| 2016-04-06T02:52:06.674-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.699-0500 c20013| 2016-04-06T02:52:06.674-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.707-0500 c20011| 2016-04-06T02:52:06.674-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.712-0500 c20013| 2016-04-06T02:52:06.674-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.712-0500 c20011| 2016-04-06T02:52:06.674-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:08.717-0500 c20011| 2016-04-06T02:52:06.674-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|2, t: 1 } and is durable through: { ts: Timestamp 1459929126000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.723-0500 c20011| 2016-04-06T02:52:06.674-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.728-0500 c20011| 
2016-04-06T02:52:06.674-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|1, t: 1 }, name-id: "20" } [js_test:multi_coll_drop] 2016-04-06T02:52:08.729-0500 c20011| 2016-04-06T02:52:06.674-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|1, t: 1 }, name-id: "20" } [js_test:multi_coll_drop] 2016-04-06T02:52:08.736-0500 c20011| 2016-04-06T02:52:06.674-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.747-0500 c20011| 2016-04-06T02:52:06.674-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.754-0500 c20012| 2016-04-06T02:52:06.674-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 58 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.764-0500 c20011| 2016-04-06T02:52:06.674-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 28ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.768-0500 c20011| 2016-04-06T02:52:06.674-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929123000|4, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 27ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.771-0500 c20012| 2016-04-06T02:52:06.674-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 53 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.774-0500 c20013| 2016-04-06T02:52:06.674-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 56 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.776-0500 c20013| 2016-04-06T02:52:06.674-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.776-0500 c20013| 2016-04-06T02:52:06.674-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool 
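The getMore commands running through this stretch of the log are the secondaries' oplog-tailing loop: each secondary holds an awaitData cursor on the primary's local.oplog.rs (cursor ids 17466612721 and 20785203637 above) and re-polls with maxTimeMS: 2500, piggybacking its lastKnownCommittedOpTime so the commit point propagates downstream; "fetcher read 0 operations from remote oplog" just means a poll returned an empty nextBatch. A shell-level approximation of one iteration, assuming the resume timestamp shown below; term and lastKnownCommittedOpTime in the logged commands are internal replication fields that ordinary clients omit:

// Rough, client-side approximation of the tailing visible above -- not the
// server's fetcher itself. Run against the sync source.
var local = db.getSiblingDB("local");

// Open a tailable, awaitData cursor on the oplog from a known position
// (resume point taken from the optimes in this log).
var res = local.runCommand({
    find: "oplog.rs",
    filter: { ts: { $gte: Timestamp(1459929126, 2) } },
    tailable: true,
    awaitData: true,
});
var cursorId = res.cursor.id;

// Poll the way the fetcher does: block up to ~2.5s per round trip. An empty
// nextBatch simply means no new oplog entries arrived within the window.
var more = local.runCommand({
    getMore: cursorId,
    collection: "oplog.rs",
    maxTimeMS: 2500,
});
printjson(more.cursor.nextBatch);

Note how, further down, one of these polls finally returns a non-empty batch containing the { op: "c", ns: "config.$cmd", o: { create: "settings" } } entry, which is the secondaries' cue to create config.settings locally.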
[js_test:multi_coll_drop] 2016-04-06T02:52:08.779-0500 c20013| 2016-04-06T02:52:06.674-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.780-0500 c20013| 2016-04-06T02:52:06.674-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:08.786-0500 c20013| 2016-04-06T02:52:06.674-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 60 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.674-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.788-0500 c20012| 2016-04-06T02:52:06.675-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.795-0500 c20012| 2016-04-06T02:52:06.675-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:08.807-0500 c20013| 2016-04-06T02:52:06.675-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 60 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.813-0500 c20012| 2016-04-06T02:52:06.675-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 61 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.675-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.816-0500 c20011| 2016-04-06T02:52:06.675-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.818-0500 c20012| 2016-04-06T02:52:06.675-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 61 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.821-0500 c20011| 2016-04-06T02:52:06.675-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.822-0500 c20013| 2016-04-06T02:52:06.677-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.823-0500 c20013| 2016-04-06T02:52:06.677-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.824-0500 c20013| 2016-04-06T02:52:06.677-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.825-0500 c20013| 2016-04-06T02:52:06.677-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.827-0500 c20013| 2016-04-06T02:52:06.677-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.828-0500 c20013| 2016-04-06T02:52:06.677-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.829-0500 c20013| 2016-04-06T02:52:06.677-0500 D 
EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.831-0500 c20013| 2016-04-06T02:52:06.677-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.835-0500 c20012| 2016-04-06T02:52:06.677-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.840-0500 c20012| 2016-04-06T02:52:06.677-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 62 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.842-0500 c20012| 2016-04-06T02:52:06.677-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 62 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.845-0500 c20011| 2016-04-06T02:52:06.677-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.848-0500 c20011| 2016-04-06T02:52:06.677-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:08.852-0500 c20011| 2016-04-06T02:52:06.677-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|2, t: 1 } and is durable through: { ts: Timestamp 1459929126000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.854-0500 c20011| 2016-04-06T02:52:06.677-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.858-0500 c20011| 2016-04-06T02:52:06.677-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.862-0500 c20011| 2016-04-06T02:52:06.677-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.866-0500 c20013| 2016-04-06T02:52:06.677-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.868-0500 c20012| 2016-04-06T02:52:06.677-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 62 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.871-0500 c20013| 2016-04-06T02:52:06.677-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 60 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.877-0500 c20011| 2016-04-06T02:52:06.677-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|1, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.886-0500 c20011| 2016-04-06T02:52:06.677-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|1, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.888-0500 c20012| 2016-04-06T02:52:06.677-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 61 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.889-0500 c20013| 2016-04-06T02:52:06.677-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.892-0500 c20013| 2016-04-06T02:52:06.677-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:08.894-0500 c20013| 2016-04-06T02:52:06.678-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 62 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.678-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.895-0500 c20012| 2016-04-06T02:52:06.677-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.901-0500 c20012| 2016-04-06T02:52:06.678-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:08.902-0500 c20013| 2016-04-06T02:52:06.678-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] 
Starting asynchronous command 62 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.909-0500 c20012| 2016-04-06T02:52:06.678-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 65 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.678-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.911-0500 c20012| 2016-04-06T02:52:06.678-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 65 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.912-0500 s20014| 2016-04-06T02:52:06.678-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 12 finished with response: { ok: 1, nModified: 0, n: 1, upserted: [ { index: 0, _id: 1 } ], opTime: { ts: Timestamp 1459929126000|2, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:08.915-0500 c20011| 2016-04-06T02:52:06.678-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.919-0500 c20011| 2016-04-06T02:52:06.678-0500 I COMMAND [conn10] command config.$cmd command: update { update: "version", updates: [ { q: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') }, u: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('5704c02606c33406d4d9c0b9') }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:429 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 72ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.921-0500 c20011| 2016-04-06T02:52:06.678-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.926-0500 s20014| 2016-04-06T02:52:06.678-0500 D ASIO [mongosMain] startCommand: RemoteCommand 14 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:36.678-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.932-0500 c20013| 2016-04-06T02:52:06.678-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.937-0500 c20013| 2016-04-06T02:52:06.678-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 63 -- target:mongovm16:20011 db:admin cmd:{ 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.941-0500 c20013| 2016-04-06T02:52:06.678-0500 D COMMAND [conn10] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.943-0500 c20013| 2016-04-06T02:52:06.678-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 63 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:08.946-0500 c20013| 2016-04-06T02:52:06.678-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929126000|2, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929126000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.947-0500 c20013| 2016-04-06T02:52:06.678-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999974μs [js_test:multi_coll_drop] 2016-04-06T02:52:08.952-0500 s20014| 2016-04-06T02:52:06.678-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 14 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:08.961-0500 c20011| 2016-04-06T02:52:06.678-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.964-0500 c20011| 2016-04-06T02:52:06.678-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:08.969-0500 c20011| 2016-04-06T02:52:06.678-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.973-0500 c20011| 2016-04-06T02:52:06.678-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|1, t: 1 } and is durable through: { ts: Timestamp 1459929126000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.976-0500 c20011| 2016-04-06T02:52:06.678-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929126000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:08.977-0500 c20013| 2016-04-06T02:52:06.678-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 63 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.978-0500 c20013| 2016-04-06T02:52:06.682-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:08.980-0500 c20013| 2016-04-06T02:52:06.682-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:08.983-0500 c20013| 2016-04-06T02:52:06.682-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:08.985-0500 c20013| 2016-04-06T02:52:06.682-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:08.987-0500 c20013| 2016-04-06T02:52:06.682-0500 D QUERY [conn10] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:08.992-0500 c20013| 2016-04-06T02:52:06.682-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.997-0500 c20013| 2016-04-06T02:52:06.683-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 65 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:08.999-0500 c20013| 2016-04-06T02:52:06.683-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 65 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:09.009-0500 c20013| 2016-04-06T02:52:06.683-0500 I COMMAND [conn10] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:372 locks:{ Global: { acquireCount: { r: 2 } }, Database: { 
acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:09.014-0500 c20011| 2016-04-06T02:52:06.683-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:09.015-0500 c20011| 2016-04-06T02:52:06.683-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:09.022-0500 c20011| 2016-04-06T02:52:06.683-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.024-0500 c20011| 2016-04-06T02:52:06.683-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|2, t: 1 } and is durable through: { ts: Timestamp 1459929126000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.031-0500 c20011| 2016-04-06T02:52:06.683-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:09.032-0500 s20014| 2016-04-06T02:52:06.683-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 14 finished with response: { waitedMS: 4, cursor: { id: 0, ns: "config.settings", firstBatch: [] }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.033-0500 c20013| 2016-04-06T02:52:06.683-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 65 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.036-0500 s20014| 2016-04-06T02:52:06.683-0500 D ASIO [mongosMain] startCommand: RemoteCommand 16 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.683-0500 cmd:{ insert: "settings", documents: [ { _id: "chunksize", value: 50 } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.037-0500 s20014| 2016-04-06T02:52:06.683-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 16 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:09.038-0500 c20011| 2016-04-06T02:52:06.683-0500 D COMMAND [conn10] run command config.$cmd { insert: "settings", documents: [ { _id: "chunksize", value: 50 } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.039-0500 c20011| 2016-04-06T02:52:06.683-0500 D STORAGE [conn10] stored meta data for config.settings @ 
RecordId(8)
[js_test:multi_coll_drop] 2016-04-06T02:52:09.040-0500 c20011| 2016-04-06T02:52:06.683-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-13--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:52:09.042-0500 c20011| 2016-04-06T02:52:06.689-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-13--6404702321693896372 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:52:09.043-0500 c20011| 2016-04-06T02:52:06.689-0500 D STORAGE [conn10] config.settings: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:09.049-0500 c20011| 2016-04-06T02:52:06.690-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-14--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.settings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:09.053-0500 c20011| 2016-04-06T02:52:06.690-0500 D STORAGE [conn10] create uri: table:index-14--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.settings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:09.058-0500 c20013| 2016-04-06T02:52:06.695-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.061-0500 c20013| 2016-04-06T02:52:06.695-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 67 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.062-0500 c20013| 2016-04-06T02:52:06.695-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 67 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.066-0500 c20011| 2016-04-06T02:52:06.695-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.066-0500 c20011| 2016-04-06T02:52:06.695-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:09.071-0500 c20011| 2016-04-06T02:52:06.695-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.072-0500 c20011| 2016-04-06T02:52:06.695-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|2, t: 1 } and is durable through: { ts: Timestamp 1459929126000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.076-0500 c20011| 2016-04-06T02:52:06.695-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.080-0500 c20013| 2016-04-06T02:52:06.695-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 67 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.083-0500 c20011| 2016-04-06T02:52:06.695-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-14--6404702321693896372 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:09.084-0500 c20011| 2016-04-06T02:52:06.695-0500 D STORAGE [conn10] config.settings: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:09.088-0500 c20013| 2016-04-06T02:52:06.695-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 62 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|3, t: 1, h: -744036901581004205, v: 2, op: "c", ns: "config.$cmd", o: { create: "settings" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.090-0500 c20013| 2016-04-06T02:52:06.695-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|3 and ending at ts: Timestamp 1459929126000|3
[js_test:multi_coll_drop] 2016-04-06T02:52:09.092-0500 c20011| 2016-04-06T02:52:06.695-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:460 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 17ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.096-0500 c20011| 2016-04-06T02:52:06.695-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:460 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 17ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.098-0500 c20013| 2016-04-06T02:52:06.696-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:09.102-0500 c20012| 2016-04-06T02:52:06.696-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 65 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|3, t: 1, h: -744036901581004205, v: 2, op: "c", ns: "config.$cmd", o: { create: "settings" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.110-0500 c20012| 2016-04-06T02:52:06.696-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|3 and ending at ts: Timestamp 1459929126000|3
[js_test:multi_coll_drop] 2016-04-06T02:52:09.113-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.115-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.115-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.118-0500 c20011| 2016-04-06T02:52:06.696-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|2, t: 1 }, name-id: "21" }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.121-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.124-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.125-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.125-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.126-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.128-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.128-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.130-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.132-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.134-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.136-0500 c20013| 2016-04-06T02:52:06.696-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:09.136-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.138-0500 c20013| 2016-04-06T02:52:06.696-0500 D STORAGE [repl writer worker 15] create collection config.settings {}
[js_test:multi_coll_drop] 2016-04-06T02:52:09.140-0500 c20013| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.141-0500 c20013| 2016-04-06T02:52:06.696-0500 D STORAGE [repl writer worker 15] stored meta data for config.settings @ RecordId(9)
[js_test:multi_coll_drop] 2016-04-06T02:52:09.144-0500 c20013| 2016-04-06T02:52:06.696-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-15-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:52:09.148-0500 c20012| 2016-04-06T02:52:06.696-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:09.152-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.154-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.157-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.157-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.158-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.162-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.165-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.168-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.168-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.169-0500 c20012| 2016-04-06T02:52:06.696-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.169-0500 c20012| 2016-04-06T02:52:06.697-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.170-0500 c20012| 2016-04-06T02:52:06.697-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.171-0500 c20012| 2016-04-06T02:52:06.697-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:09.173-0500 c20012| 2016-04-06T02:52:06.697-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.185-0500 c20012| 2016-04-06T02:52:06.697-0500 D STORAGE [repl writer worker 15] create collection config.settings {}
[js_test:multi_coll_drop] 2016-04-06T02:52:09.185-0500 c20012| 2016-04-06T02:52:06.697-0500 D STORAGE [repl writer worker 15] stored meta data for config.settings @ RecordId(9)
[js_test:multi_coll_drop] 2016-04-06T02:52:09.187-0500 c20012| 2016-04-06T02:52:06.697-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-15-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:52:09.188-0500 c20012| 2016-04-06T02:52:06.697-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.190-0500 c20012| 2016-04-06T02:52:06.697-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.191-0500 c20013| 2016-04-06T02:52:06.697-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.192-0500 c20012| 2016-04-06T02:52:06.697-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.202-0500 c20013| 2016-04-06T02:52:06.698-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 70 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.698-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.203-0500 c20013| 2016-04-06T02:52:06.698-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 70 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.205-0500 c20011| 2016-04-06T02:52:06.698-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.212-0500 c20011| 2016-04-06T02:52:06.698-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:477 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.215-0500 c20013| 2016-04-06T02:52:06.698-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 70 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|4, t: 1, h: -3634950017947575031, v: 2, op: "i", ns: "config.settings", o: { _id: "chunksize", value: 50 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.217-0500 c20013| 2016-04-06T02:52:06.698-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|4 and ending at ts: Timestamp 1459929126000|4
[js_test:multi_coll_drop] 2016-04-06T02:52:09.225-0500 c20012| 2016-04-06T02:52:06.698-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 67 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.698-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.231-0500 c20012| 2016-04-06T02:52:06.698-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 67 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.238-0500 c20011| 2016-04-06T02:52:06.698-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.241-0500 c20012| 2016-04-06T02:52:06.699-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 67 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|4, t: 1, h: -3634950017947575031, v: 2, op: "i", ns: "config.settings", o: { _id: "chunksize", value: 50 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.243-0500 c20011| 2016-04-06T02:52:06.698-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:477 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.247-0500 c20012| 2016-04-06T02:52:06.699-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|4 and ending at ts: Timestamp 1459929126000|4
[js_test:multi_coll_drop] 2016-04-06T02:52:09.250-0500 c20013| 2016-04-06T02:52:06.700-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 72 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.700-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.251-0500 c20013| 2016-04-06T02:52:06.700-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 72 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.252-0500 c20011| 2016-04-06T02:52:06.700-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.256-0500 c20012| 2016-04-06T02:52:06.701-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 69 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.701-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.261-0500 c20012| 2016-04-06T02:52:06.701-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 69 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.263-0500 c20011| 2016-04-06T02:52:06.701-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.267-0500 c20012| 2016-04-06T02:52:06.704-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-15-6577373056560964212 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:52:09.269-0500 c20012| 2016-04-06T02:52:06.704-0500 D STORAGE [repl writer worker 15] config.settings: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:09.272-0500 c20012| 2016-04-06T02:52:06.704-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-16-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.settings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:09.276-0500 c20012| 2016-04-06T02:52:06.704-0500 D STORAGE [repl writer worker 15] create uri: table:index-16-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.settings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:09.278-0500 c20013| 2016-04-06T02:52:06.704-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-15-751336887848580549 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:52:09.280-0500 c20013| 2016-04-06T02:52:06.704-0500 D STORAGE [repl writer worker 15] config.settings: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:09.282-0500 c20013| 2016-04-06T02:52:06.704-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-16-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.settings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:09.288-0500 c20013| 2016-04-06T02:52:06.705-0500 D STORAGE [repl writer worker 15] create uri: table:index-16-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.settings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:09.289-0500 2016-04-06T02:52:06.713-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused
[js_test:multi_coll_drop] 2016-04-06T02:52:09.290-0500 c20012| 2016-04-06T02:52:06.717-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-16-6577373056560964212 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:09.293-0500 c20012| 2016-04-06T02:52:06.717-0500 D STORAGE [repl writer worker 15] config.settings: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:09.294-0500 c20012| 2016-04-06T02:52:06.717-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.296-0500 c20012| 2016-04-06T02:52:06.718-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.299-0500 c20012| 2016-04-06T02:52:06.718-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.301-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.303-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.307-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.309-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.311-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.313-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.315-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.317-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.318-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.319-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.320-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.322-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.323-0500 c20012| 2016-04-06T02:52:06.719-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.325-0500 c20013| 2016-04-06T02:52:06.720-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-16-751336887848580549 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:09.329-0500 c20013| 2016-04-06T02:52:06.720-0500 D STORAGE [repl writer worker 15] config.settings: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:09.334-0500 c20013| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.336-0500 c20013| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.336-0500 c20013| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.337-0500 c20013| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.342-0500 c20013| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.342-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.344-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.346-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.348-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.358-0500 c20011| 2016-04-06T02:52:06.721-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.358-0500 c20011| 2016-04-06T02:52:06.721-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:09.359-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.359-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.364-0500 c20011| 2016-04-06T02:52:06.721-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|3, t: 1 } and is durable through: { ts: Timestamp 1459929126000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.367-0500 c20011| 2016-04-06T02:52:06.721-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|2, t: 1 }, name-id: "21" }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.368-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.374-0500 c20012| 2016-04-06T02:52:06.720-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:09.381-0500 c20012| 2016-04-06T02:52:06.720-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:09.386-0500 c20011| 2016-04-06T02:52:06.721-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.386-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.395-0500 c20011| 2016-04-06T02:52:06.721-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.396-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.398-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.399-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.400-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.400-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.401-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.402-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.403-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.407-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.408-0500 c20012| 2016-04-06T02:52:06.720-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.409-0500 c20012| 2016-04-06T02:52:06.721-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:09.410-0500 c20012| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.411-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.414-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.416-0500 c20013| 2016-04-06T02:52:06.721-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:09.418-0500 c20013| 2016-04-06T02:52:06.721-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:09.419-0500 c20013| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.421-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.422-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.423-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.425-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.429-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.430-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.430-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.431-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.433-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.434-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.434-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.436-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.437-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.440-0500 c20013| 2016-04-06T02:52:06.722-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:09.441-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.443-0500 c20012| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.446-0500 c20012| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.452-0500 c20012| 2016-04-06T02:52:06.721-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.457-0500 c20012| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.461-0500 c20013| 2016-04-06T02:52:06.722-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.464-0500 c20013| 2016-04-06T02:52:06.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 73 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.466-0500 c20013| 2016-04-06T02:52:06.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 73 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.471-0500 c20012| 2016-04-06T02:52:06.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 70 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.473-0500 c20012| 2016-04-06T02:52:06.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 70 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.474-0500 c20012| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.477-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.478-0500 c20012| 2016-04-06T02:52:06.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 70 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.479-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.480-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.480-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.481-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.481-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.482-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.483-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.484-0500 c20012| 2016-04-06T02:52:06.721-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.485-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.487-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.490-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.491-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.493-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.498-0500 c20011| 2016-04-06T02:52:06.722-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.498-0500 c20011| 2016-04-06T02:52:06.722-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:09.500-0500 c20011| 2016-04-06T02:52:06.722-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.502-0500 c20011| 2016-04-06T02:52:06.722-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|3, t: 1 } and is durable through: { ts: Timestamp 1459929126000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.506-0500 c20011| 2016-04-06T02:52:06.722-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|2, t: 1 }, name-id: "21" }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.512-0500 c20011| 2016-04-06T02:52:06.722-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.515-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.517-0500 c20013| 2016-04-06T02:52:06.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 73 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.518-0500 c20013| 2016-04-06T02:52:06.722-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.519-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.519-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.520-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.520-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.523-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.523-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.523-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.527-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.529-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.532-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.533-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.536-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.536-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.536-0500 c20012| 2016-04-06T02:52:06.723-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.537-0500 c20012| 2016-04-06T02:52:06.724-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.542-0500 c20013| 2016-04-06T02:52:06.724-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.543-0500 c20013| 2016-04-06T02:52:06.724-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.544-0500 c20013| 2016-04-06T02:52:06.724-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:09.546-0500 c20012| 2016-04-06T02:52:06.724-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.548-0500 c20012| 2016-04-06T02:52:06.724-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:09.557-0500 c20013| 2016-04-06T02:52:06.725-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.563-0500 c20013| 2016-04-06T02:52:06.725-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 75 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.565-0500 c20013| 2016-04-06T02:52:06.725-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 75 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.575-0500 c20011| 2016-04-06T02:52:06.725-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.576-0500 c20011| 2016-04-06T02:52:06.725-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:09.580-0500 c20011| 2016-04-06T02:52:06.725-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.584-0500 c20011| 2016-04-06T02:52:06.725-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|4, t: 1 } and is durable through: { ts: Timestamp 1459929126000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.586-0500 c20011| 2016-04-06T02:52:06.725-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|2, t: 1 }, name-id: "21" }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.592-0500 c20011| 2016-04-06T02:52:06.725-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.594-0500 c20013| 2016-04-06T02:52:06.725-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 75 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.600-0500 c20011| 2016-04-06T02:52:06.726-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.601-0500 c20011| 2016-04-06T02:52:06.726-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:09.612-0500 c20011| 2016-04-06T02:52:06.726-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|4, t: 1 } and is durable through: { ts: Timestamp 1459929126000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.620-0500 c20011| 2016-04-06T02:52:06.726-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|2, t: 1 }, name-id: "21" }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.622-0500 c20012| 2016-04-06T02:52:06.725-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:09.635-0500 c20012| 2016-04-06T02:52:06.725-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.643-0500 c20012| 2016-04-06T02:52:06.725-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 72 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.644-0500 c20012| 2016-04-06T02:52:06.726-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 72 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.649-0500 c20011| 2016-04-06T02:52:06.726-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.652-0500 c20011| 2016-04-06T02:52:06.726-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.656-0500 c20012| 2016-04-06T02:52:06.726-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 72 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.661-0500 c20013| 2016-04-06T02:52:06.727-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.668-0500 c20013| 2016-04-06T02:52:06.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 77 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.671-0500 c20013| 2016-04-06T02:52:06.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 77 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.674-0500 c20011| 2016-04-06T02:52:06.727-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.676-0500 c20011| 2016-04-06T02:52:06.727-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:09.679-0500 c20011| 2016-04-06T02:52:06.727-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.684-0500 c20011| 2016-04-06T02:52:06.727-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|4, t: 1 } and is durable through: { ts: Timestamp 1459929126000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.693-0500 c20011| 2016-04-06T02:52:06.727-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.704-0500 c20012| 2016-04-06T02:52:06.727-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.712-0500 c20012| 2016-04-06T02:52:06.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 74 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.712-0500 c20012| 2016-04-06T02:52:06.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 74 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.713-0500 c20011| 2016-04-06T02:52:06.727-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.719-0500 c20011| 2016-04-06T02:52:06.727-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|3, t: 1 }, name-id: "24" }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.720-0500 c20011| 2016-04-06T02:52:06.727-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|3, t: 1 }, name-id: "24" }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.732-0500 c20011| 2016-04-06T02:52:06.727-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.732-0500 c20011| 2016-04-06T02:52:06.727-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:09.752-0500 c20011| 2016-04-06T02:52:06.727-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 27ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.754-0500 c20013| 2016-04-06T02:52:06.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 77 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.761-0500 c20011| 2016-04-06T02:52:06.727-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|4, t: 1 } and is durable through: { ts: Timestamp 1459929126000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.764-0500 c20011| 2016-04-06T02:52:06.728-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|3, t: 1 }, name-id: "24" }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.771-0500 c20013| 2016-04-06T02:52:06.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 72 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.782-0500 c20011| 2016-04-06T02:52:06.728-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.790-0500 c20011| 2016-04-06T02:52:06.728-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.791-0500 c20012| 2016-04-06T02:52:06.728-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 74 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.798-0500 c20013| 2016-04-06T02:52:06.728-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.800-0500 c20013| 2016-04-06T02:52:06.728-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:09.805-0500 c20013| 2016-04-06T02:52:06.728-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 80 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.728-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.806-0500 c20013| 2016-04-06T02:52:06.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 80 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:09.807-0500 c20011| 2016-04-06T02:52:06.728-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.810-0500 c20011| 2016-04-06T02:52:06.728-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|2, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 27ms
[js_test:multi_coll_drop] 2016-04-06T02:52:09.813-0500 c20012| 2016-04-06T02:52:06.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 69 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:09.814-0500 c20012| 2016-04-06T02:52:06.728-0500 D REPL
[ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.815-0500 c20012| 2016-04-06T02:52:06.728-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:09.819-0500 c20012| 2016-04-06T02:52:06.728-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 77 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.728-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:09.821-0500 c20012| 2016-04-06T02:52:06.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 77 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:09.822-0500 c20011| 2016-04-06T02:52:06.729-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:09.827-0500 c20013| 2016-04-06T02:52:06.729-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:09.831-0500 c20013| 2016-04-06T02:52:06.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 81 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:09.832-0500 c20013| 2016-04-06T02:52:06.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 81 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:09.836-0500 c20011| 2016-04-06T02:52:06.730-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:09.836-0500 c20011| 2016-04-06T02:52:06.730-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:09.839-0500 c20011| 2016-04-06T02:52:06.730-0500 D REPL [conn16] received notification that node with memberID 1 
in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.839-0500 c20011| 2016-04-06T02:52:06.730-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|4, t: 1 } and is durable through: { ts: Timestamp 1459929126000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.840-0500 c20011| 2016-04-06T02:52:06.730-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.846-0500 c20011| 2016-04-06T02:52:06.730-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:09.848-0500 c20012| 2016-04-06T02:52:06.730-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:09.854-0500 c20012| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 78 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:09.855-0500 c20012| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 78 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:09.859-0500 c20011| 2016-04-06T02:52:06.730-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|3, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:09.864-0500 c20011| 2016-04-06T02:52:06.730-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:09.864-0500 c20011| 2016-04-06T02:52:06.730-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:09.868-0500 c20011| 2016-04-06T02:52:06.730-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|4, t: 1 } and is durable through: { ts: Timestamp 1459929126000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.872-0500 c20011| 2016-04-06T02:52:06.730-0500 I COMMAND [conn10] command config.settings command: insert { insert: "settings", documents: [ { _id: "chunksize", value: 50 } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 4, w: 4 } }, Database: { acquireCount: { w: 3, W: 1 } }, Collection: { acquireCount: { w: 1, W: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 46ms [js_test:multi_coll_drop] 2016-04-06T02:52:09.877-0500 c20011| 2016-04-06T02:52:06.730-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.879-0500 c20011| 2016-04-06T02:52:06.730-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:09.883-0500 c20013| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 80 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.894-0500 c20012| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 78 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.896-0500 c20013| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 81 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.898-0500 c20013| 2016-04-06T02:52:06.730-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.899-0500 c20013| 2016-04-06T02:52:06.730-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:09.903-0500 c20013| 2016-04-06T02:52:06.730-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 
84 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.730-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:09.905-0500 c20013| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 84 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:09.906-0500 c20012| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 77 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.912-0500 c20011| 2016-04-06T02:52:06.730-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|3, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:09.916-0500 c20011| 2016-04-06T02:52:06.730-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:09.921-0500 s20014| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 16 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929126000|4, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:09.925-0500 s20014| 2016-04-06T02:52:06.730-0500 D ASIO [mongosMain] startCommand: RemoteCommand 18 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.730-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.929-0500 s20014| 2016-04-06T02:52:06.730-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 18 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:09.935-0500 c20011| 2016-04-06T02:52:06.730-0500 D COMMAND [conn10] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.936-0500 c20011| 2016-04-06T02:52:06.730-0500 D STORAGE [conn10] stored meta data for config.chunks @ RecordId(9) [js_test:multi_coll_drop] 2016-04-06T02:52:09.940-0500 c20012| 2016-04-06T02:52:06.730-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:09.943-0500 c20011| 2016-04-06T02:52:06.731-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-15--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:09.946-0500 c20012| 2016-04-06T02:52:06.731-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 
2016-04-06T02:52:09.949-0500 c20012| 2016-04-06T02:52:06.731-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 81 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.731-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:09.949-0500 c20012| 2016-04-06T02:52:06.731-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 81 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:09.951-0500 c20011| 2016-04-06T02:52:06.731-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:09.959-0500 c20011| 2016-04-06T02:52:06.741-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-15--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:09.962-0500 c20011| 2016-04-06T02:52:06.741-0500 D STORAGE [conn10] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:10.016-0500 c20011| 2016-04-06T02:52:06.741-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-16--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.017-0500 c20011| 2016-04-06T02:52:06.741-0500 D STORAGE [conn10] create uri: table:index-16--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.019-0500 c20011| 2016-04-06T02:52:06.748-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-16--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:10.020-0500 c20011| 2016-04-06T02:52:06.748-0500 D STORAGE [conn10] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:10.024-0500 c20011| 2016-04-06T02:52:06.748-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-17--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.028-0500 c20011| 2016-04-06T02:52:06.748-0500 D STORAGE [conn10] create uri: table:index-17--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.032-0500 c20011| 2016-04-06T02:52:06.748-0500 I COMMAND [conn14] 
command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:458 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:52:10.034-0500 c20013| 2016-04-06T02:52:06.748-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 84 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|5, t: 1, h: 4314582289991331182, v: 2, op: "c", ns: "config.$cmd", o: { create: "chunks" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:10.037-0500 c20013| 2016-04-06T02:52:06.748-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|5 and ending at ts: Timestamp 1459929126000|5 [js_test:multi_coll_drop] 2016-04-06T02:52:10.040-0500 c20012| 2016-04-06T02:52:06.748-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 81 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|5, t: 1, h: 4314582289991331182, v: 2, op: "c", ns: "config.$cmd", o: { create: "chunks" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:10.045-0500 c20011| 2016-04-06T02:52:06.748-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:458 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:52:10.046-0500 c20013| 2016-04-06T02:52:06.749-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:10.047-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.050-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.056-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.056-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.059-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.060-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.061-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.062-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.064-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.067-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.068-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.070-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.070-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.071-0500 c20013| 2016-04-06T02:52:06.749-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:10.072-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.074-0500 c20012| 2016-04-06T02:52:06.748-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|5 and ending at ts: Timestamp 1459929126000|5 [js_test:multi_coll_drop] 2016-04-06T02:52:10.075-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.077-0500 c20013| 2016-04-06T02:52:06.749-0500 D STORAGE [repl writer worker 0] create collection config.chunks {} [js_test:multi_coll_drop] 2016-04-06T02:52:10.079-0500 c20013| 2016-04-06T02:52:06.749-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.080-0500 c20013| 2016-04-06T02:52:06.749-0500 D STORAGE [repl writer worker 0] stored meta data for config.chunks @ 
RecordId(10) [js_test:multi_coll_drop] 2016-04-06T02:52:10.085-0500 c20013| 2016-04-06T02:52:06.749-0500 D STORAGE [repl writer worker 0] WiredTigerKVEngine::createRecordStore uri: table:collection-17-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:10.086-0500 c20012| 2016-04-06T02:52:06.749-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:10.087-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.087-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.087-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.089-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.089-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.091-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.092-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.096-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.097-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.097-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.100-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.101-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.102-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.102-0500 c20012| 2016-04-06T02:52:06.750-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:10.103-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.108-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.109-0500 c20012| 2016-04-06T02:52:06.750-0500 D STORAGE [repl writer worker 6] create collection config.chunks {} 
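The exchange above shows the config primary (c20011) building config.chunks and its unique { ns: 1, min: 1 } index under a w: "majority" write concern, while the secondaries (c20012, c20013) replay the collection create through oplog getMore batches and acknowledge it via replSetUpdatePosition. For reference, a minimal mongo-shell sketch of the equivalent index request (connection target taken from the log; the sketch itself is illustrative and not part of the test) is:

    // Illustrative only: reproduces the ns_1_min_1 index build traced above.
    // mongovm16:20011 is the config-server primary from this log.
    var conn = new Mongo("mongovm16:20011");
    var res = conn.getDB("config").chunks.createIndex(
        { ns: 1, min: 1 },                   // chunk lookup key: namespace plus range start
        { name: "ns_1_min_1", unique: true } // one chunk document per (ns, min) pair
    );
    printjson(res);
    // The w: "majority" write concern seen in the log is what makes the primary
    // wait for the replSetUpdatePosition acknowledgements before returning.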
[js_test:multi_coll_drop] 2016-04-06T02:52:10.112-0500 c20012| 2016-04-06T02:52:06.750-0500 D STORAGE [repl writer worker 6] stored meta data for config.chunks @ RecordId(10) [js_test:multi_coll_drop] 2016-04-06T02:52:10.115-0500 c20012| 2016-04-06T02:52:06.750-0500 D STORAGE [repl writer worker 6] WiredTigerKVEngine::createRecordStore uri: table:collection-17-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:10.118-0500 c20012| 2016-04-06T02:52:06.750-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.121-0500 c20013| 2016-04-06T02:52:06.750-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 86 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.750-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:10.122-0500 c20013| 2016-04-06T02:52:06.751-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 86 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:10.129-0500 c20011| 2016-04-06T02:52:06.751-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:10.131-0500 c20012| 2016-04-06T02:52:06.751-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 83 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.751-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:10.133-0500 c20012| 2016-04-06T02:52:06.751-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 83 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:10.135-0500 c20011| 2016-04-06T02:52:06.751-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:10.135-0500 c20011| 2016-04-06T02:52:06.752-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-17--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:10.135-0500 c20011| 2016-04-06T02:52:06.752-0500 I INDEX [conn10] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:10.135-0500 c20011| 2016-04-06T02:52:06.752-0500 I INDEX [conn10] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:10.135-0500 c20011| 2016-04-06T02:52:06.752-0500 D INDEX [conn10] bulk commit starting for index: ns_1_min_1 [js_test:multi_coll_drop] 2016-04-06T02:52:10.136-0500 c20011| 2016-04-06T02:52:06.753-0500 D INDEX [conn10] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:10.136-0500 c20013| 2016-04-06T02:52:06.754-0500 D STORAGE [repl writer worker 0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-17-751336887848580549 ok range 1 -> 1 
current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:10.147-0500 c20013| 2016-04-06T02:52:06.754-0500 D STORAGE [repl writer worker 0] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:10.150-0500 c20013| 2016-04-06T02:52:06.754-0500 D STORAGE [repl writer worker 0] WiredTigerKVEngine::createSortedDataInterface ident: index-18-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.154-0500 c20013| 2016-04-06T02:52:06.754-0500 D STORAGE [repl writer worker 0] create uri: table:index-18-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.154-0500 c20011| 2016-04-06T02:52:06.755-0500 I INDEX [conn10] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:10.155-0500 c20011| 2016-04-06T02:52:06.755-0500 D STORAGE [conn10] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:10.160-0500 c20011| 2016-04-06T02:52:06.755-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:545 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:10.163-0500 c20013| 2016-04-06T02:52:06.755-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 86 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|6, t: 1, h: 1201721484615831387, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebb'), ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:10.167-0500 c20011| 2016-04-06T02:52:06.755-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:545 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:10.169-0500 c20012| 2016-04-06T02:52:06.755-0500 D STORAGE [repl writer worker 6] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-17-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:10.174-0500 c20012| 2016-04-06T02:52:06.755-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 83 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|6, t: 1, h: 1201721484615831387, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebb'), ns: "config.chunks", key: { ns: 1, min: 1 }, name: 
"ns_1_min_1", unique: true } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:10.176-0500 c20012| 2016-04-06T02:52:06.756-0500 D STORAGE [repl writer worker 6] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:10.179-0500 c20012| 2016-04-06T02:52:06.756-0500 D STORAGE [repl writer worker 6] WiredTigerKVEngine::createSortedDataInterface ident: index-18-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.181-0500 c20012| 2016-04-06T02:52:06.756-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|6 and ending at ts: Timestamp 1459929126000|6 [js_test:multi_coll_drop] 2016-04-06T02:52:10.184-0500 c20013| 2016-04-06T02:52:06.756-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|6 and ending at ts: Timestamp 1459929126000|6 [js_test:multi_coll_drop] 2016-04-06T02:52:10.195-0500 c20012| 2016-04-06T02:52:06.756-0500 D STORAGE [repl writer worker 6] create uri: table:index-18-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.203-0500 c20013| 2016-04-06T02:52:06.758-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 88 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.758-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:10.205-0500 c20013| 2016-04-06T02:52:06.758-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 88 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:10.206-0500 c20012| 2016-04-06T02:52:06.758-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 85 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.758-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:10.207-0500 c20012| 2016-04-06T02:52:06.758-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 85 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:10.208-0500 c20011| 2016-04-06T02:52:06.758-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:10.209-0500 c20011| 2016-04-06T02:52:06.758-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:10.209-0500 c20011| 2016-04-06T02:52:06.760-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|6, t: 1 } is not yet part of the current 'committed' 
snapshot: { optime: { ts: Timestamp 1459929126000|4, t: 1 }, name-id: "25" } [js_test:multi_coll_drop] 2016-04-06T02:52:10.210-0500 c20013| 2016-04-06T02:52:06.761-0500 D STORAGE [repl writer worker 0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-18-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:10.211-0500 c20013| 2016-04-06T02:52:06.762-0500 D STORAGE [repl writer worker 0] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:10.212-0500 c20013| 2016-04-06T02:52:06.762-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.213-0500 c20013| 2016-04-06T02:52:06.762-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.213-0500 c20013| 2016-04-06T02:52:06.762-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.215-0500 c20013| 2016-04-06T02:52:06.762-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.215-0500 c20013| 2016-04-06T02:52:06.762-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.216-0500 c20013| 2016-04-06T02:52:06.762-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.219-0500 c20013| 2016-04-06T02:52:06.762-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.220-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.220-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.222-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.224-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.224-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.227-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.228-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.230-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.231-0500 c20013| 2016-04-06T02:52:06.763-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.234-0500 c20013| 2016-04-06T02:52:06.763-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be 
cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:10.236-0500 c20013| 2016-04-06T02:52:06.763-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:10.238-0500 c20012| 2016-04-06T02:52:06.763-0500 D STORAGE [repl writer worker 6] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-18-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:10.239-0500 c20012| 2016-04-06T02:52:06.763-0500 D STORAGE [repl writer worker 6] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:10.242-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.244-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.244-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.245-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.246-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.247-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.249-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.249-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.252-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.255-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.257-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.258-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.258-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.260-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.262-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.263-0500 c20013| 2016-04-06T02:52:06.764-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:10.265-0500 
c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.266-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.268-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.269-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.272-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.273-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.273-0500 c20012| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.276-0500 c20013| 2016-04-06T02:52:06.764-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.277-0500 c20013| 2016-04-06T02:52:06.764-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:10.284-0500 c20013| 2016-04-06T02:52:06.764-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 89 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:10.287-0500 c20013| 2016-04-06T02:52:06.764-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 89 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:10.292-0500 c20013| 2016-04-06T02:52:06.764-0500 D STORAGE [repl writer worker 8] WiredTigerKVEngine::createSortedDataInterface ident: index-19-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.296-0500 c20013| 2016-04-06T02:52:06.764-0500 D STORAGE [repl writer 
worker 8] create uri: table:index-19-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:10.297-0500 c20012| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.298-0500 c20012| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.299-0500 c20012| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.303-0500 c20012| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.308-0500 c20011| 2016-04-06T02:52:06.765-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:10.308-0500 c20011| 2016-04-06T02:52:06.765-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:10.309-0500 c20011| 2016-04-06T02:52:06.765-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:10.313-0500 c20011| 2016-04-06T02:52:06.765-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|5, t: 1 } and is durable through: { ts: Timestamp 1459929126000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:10.316-0500 c20011| 2016-04-06T02:52:06.765-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|4, t: 1 }, name-id: "25" } [js_test:multi_coll_drop] 2016-04-06T02:52:10.321-0500 c20011| 2016-04-06T02:52:06.765-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:10.322-0500 c20013| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl 
[js_test:multi_coll_drop] 2016-04-06T02:52:10.326-0500 c20012| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.327-0500 c20012| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.328-0500 c20013| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.329-0500 c20013| 2016-04-06T02:52:06.765-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.333-0500 c20012| 2016-04-06T02:52:06.767-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:10.337-0500 c20012| 2016-04-06T02:52:06.767-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:10.339-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.341-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.341-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.343-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.343-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.344-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.344-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.345-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.347-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.348-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.352-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.353-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.353-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.354-0500 c20012| 2016-04-06T02:52:06.767-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:10.356-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.362-0500 c20011| 2016-04-06T02:52:06.767-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.363-0500 c20011| 2016-04-06T02:52:06.767-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:10.365-0500 c20011| 2016-04-06T02:52:06.767-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|5, t: 1 } and is durable through: { ts: Timestamp 1459929126000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.368-0500 c20011| 2016-04-06T02:52:06.767-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|4, t: 1 }, name-id: "25" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.376-0500 c20011| 2016-04-06T02:52:06.768-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.386-0500 c20011| 2016-04-06T02:52:06.768-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.390-0500 c20012| 2016-04-06T02:52:06.767-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.395-0500 c20012| 2016-04-06T02:52:06.767-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-19-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.chunks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:10.395-0500 c20012| 2016-04-06T02:52:06.767-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.399-0500 c20012| 2016-04-06T02:52:06.767-0500 D STORAGE [repl writer worker 15] create uri: table:index-19-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.chunks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:10.403-0500 c20012| 2016-04-06T02:52:06.767-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 86 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.405-0500 c20012| 2016-04-06T02:52:06.767-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 86 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.405-0500 c20012| 2016-04-06T02:52:06.768-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 86 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.408-0500 c20013| 2016-04-06T02:52:06.766-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 89 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.411-0500 c20013| 2016-04-06T02:52:06.768-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.417-0500 c20013| 2016-04-06T02:52:06.769-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 91 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.418-0500 c20013| 2016-04-06T02:52:06.769-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 91 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.419-0500 c20012| 2016-04-06T02:52:06.775-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.422-0500 c20011| 2016-04-06T02:52:06.775-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.424-0500 c20011| 2016-04-06T02:52:06.775-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:10.426-0500 c20011| 2016-04-06T02:52:06.775-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.429-0500 c20011| 2016-04-06T02:52:06.775-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|5, t: 1 } and is durable through: { ts: Timestamp 1459929126000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.431-0500 c20011| 2016-04-06T02:52:06.775-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.434-0500 c20011| 2016-04-06T02:52:06.775-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|5, t: 1 }, name-id: "29" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.439-0500 c20011| 2016-04-06T02:52:06.775-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|5, t: 1 }, name-id: "29" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.446-0500 c20011| 2016-04-06T02:52:06.775-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.450-0500 c20012| 2016-04-06T02:52:06.776-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 85 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.453-0500 c20013| 2016-04-06T02:52:06.776-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 88 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.455-0500 c20011| 2016-04-06T02:52:06.775-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 17ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.462-0500 c20011| 2016-04-06T02:52:06.776-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 17ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.466-0500 c20012| 2016-04-06T02:52:06.776-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.467-0500 c20012| 2016-04-06T02:52:06.776-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:10.490-0500 c20013| 2016-04-06T02:52:06.776-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.494-0500 c20013| 2016-04-06T02:52:06.776-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:10.521-0500 c20013| 2016-04-06T02:52:06.776-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 93 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.776-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.522-0500 c20013| 2016-04-06T02:52:06.776-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 93 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.529-0500 c20012| 2016-04-06T02:52:06.776-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 89 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.776-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.530-0500 c20012| 2016-04-06T02:52:06.776-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 89 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.533-0500 c20011| 2016-04-06T02:52:06.776-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.534-0500 c20011| 2016-04-06T02:52:06.776-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.537-0500 c20013| 2016-04-06T02:52:06.777-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 91 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.547-0500 c20011| 2016-04-06T02:52:06.778-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.548-0500 c20011| 2016-04-06T02:52:06.778-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:10.553-0500 c20011| 2016-04-06T02:52:06.778-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|5, t: 1 } and is durable through: { ts: Timestamp 1459929126000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.558-0500 c20011| 2016-04-06T02:52:06.778-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|5, t: 1 }, name-id: "29" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.562-0500 c20011| 2016-04-06T02:52:06.778-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.566-0500 c20011| 2016-04-06T02:52:06.778-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.570-0500 c20012| 2016-04-06T02:52:06.778-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.580-0500 c20012| 2016-04-06T02:52:06.778-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 90 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.583-0500 c20012| 2016-04-06T02:52:06.778-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 90 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.587-0500 c20012| 2016-04-06T02:52:06.778-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 90 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.590-0500 c20013| 2016-04-06T02:52:06.779-0500 D STORAGE [repl writer worker 8] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-19-751336887848580549 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:10.591-0500 c20013| 2016-04-06T02:52:06.779-0500 I INDEX [repl writer worker 8] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.592-0500 c20013| 2016-04-06T02:52:06.779-0500 I INDEX [repl writer worker 8] building index using bulk method
[js_test:multi_coll_drop] 2016-04-06T02:52:10.593-0500 c20013| 2016-04-06T02:52:06.779-0500 D INDEX [repl writer worker 8] bulk commit starting for index: ns_1_min_1
[js_test:multi_coll_drop] 2016-04-06T02:52:10.594-0500 c20013| 2016-04-06T02:52:06.780-0500 D INDEX [repl writer worker 8] done building bottom layer, going to commit
[js_test:multi_coll_drop] 2016-04-06T02:52:10.596-0500 c20012| 2016-04-06T02:52:06.781-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-19-6577373056560964212 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:10.597-0500 c20012| 2016-04-06T02:52:06.781-0500 I INDEX [repl writer worker 15] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.599-0500 c20012| 2016-04-06T02:52:06.781-0500 I INDEX [repl writer worker 15] building index using bulk method
[js_test:multi_coll_drop] 2016-04-06T02:52:10.600-0500 c20012| 2016-04-06T02:52:06.781-0500 D INDEX [repl writer worker 15] bulk commit starting for index: ns_1_min_1
[js_test:multi_coll_drop] 2016-04-06T02:52:10.602-0500 c20012| 2016-04-06T02:52:06.781-0500 D INDEX [repl writer worker 15] done building bottom layer, going to commit
[js_test:multi_coll_drop] 2016-04-06T02:52:10.603-0500 c20013| 2016-04-06T02:52:06.781-0500 I INDEX [repl writer worker 8] build index done. scanned 0 total records. 0 secs
[js_test:multi_coll_drop] 2016-04-06T02:52:10.606-0500 c20013| 2016-04-06T02:52:06.782-0500 D STORAGE [repl writer worker 8] config.chunks: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:10.606-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.608-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.611-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.614-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.617-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.619-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.620-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.620-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.622-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.626-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.626-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.628-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.629-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.632-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.635-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.654-0500 c20013| 2016-04-06T02:52:06.782-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.656-0500 c20013| 2016-04-06T02:52:06.783-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:10.664-0500 c20013| 2016-04-06T02:52:06.783-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.670-0500 c20013| 2016-04-06T02:52:06.783-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 95 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.679-0500 c20013| 2016-04-06T02:52:06.783-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 95 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.680-0500 c20012| 2016-04-06T02:52:06.783-0500 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
[js_test:multi_coll_drop] 2016-04-06T02:52:10.684-0500 c20012| 2016-04-06T02:52:06.783-0500 D STORAGE [repl writer worker 15] config.chunks: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:10.695-0500 c20011| 2016-04-06T02:52:06.783-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.696-0500 c20011| 2016-04-06T02:52:06.783-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:10.702-0500 c20011| 2016-04-06T02:52:06.783-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.706-0500 c20011| 2016-04-06T02:52:06.783-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|6, t: 1 } and is durable through: { ts: Timestamp 1459929126000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.707-0500 c20011| 2016-04-06T02:52:06.783-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|5, t: 1 }, name-id: "29" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.708-0500 c20012| 2016-04-06T02:52:06.783-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.713-0500 c20011| 2016-04-06T02:52:06.783-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.715-0500 c20012| 2016-04-06T02:52:06.783-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.718-0500 c20012| 2016-04-06T02:52:06.783-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.720-0500 c20012| 2016-04-06T02:52:06.783-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.722-0500 c20012| 2016-04-06T02:52:06.783-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.722-0500 c20012| 2016-04-06T02:52:06.783-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.723-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.725-0500 c20013| 2016-04-06T02:52:06.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 95 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.726-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.726-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.727-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.728-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.728-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.729-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.731-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.733-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.738-0500 c20012| 2016-04-06T02:52:06.784-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:10.739-0500 c20012| 2016-04-06T02:52:06.784-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:10.750-0500 c20011| 2016-04-06T02:52:06.785-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.750-0500 c20011| 2016-04-06T02:52:06.785-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:10.752-0500 c20011| 2016-04-06T02:52:06.785-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|6, t: 1 } and is durable through: { ts: Timestamp 1459929126000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.756-0500 c20011| 2016-04-06T02:52:06.785-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|5, t: 1 }, name-id: "29" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.766-0500 c20011| 2016-04-06T02:52:06.785-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.771-0500 c20011| 2016-04-06T02:52:06.785-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.775-0500 c20012| 2016-04-06T02:52:06.784-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.781-0500 c20012| 2016-04-06T02:52:06.785-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 92 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.786-0500 c20012| 2016-04-06T02:52:06.785-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 92 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.787-0500 c20012| 2016-04-06T02:52:06.785-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 92 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.795-0500 c20013| 2016-04-06T02:52:06.786-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.800-0500 c20013| 2016-04-06T02:52:06.786-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 97 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.802-0500 c20013| 2016-04-06T02:52:06.786-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 97 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.810-0500 c20011| 2016-04-06T02:52:06.786-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.810-0500 c20011| 2016-04-06T02:52:06.786-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:10.815-0500 c20011| 2016-04-06T02:52:06.786-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.822-0500 c20011| 2016-04-06T02:52:06.786-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|6, t: 1 } and is durable through: { ts: Timestamp 1459929126000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.826-0500 c20011| 2016-04-06T02:52:06.786-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.832-0500 c20011| 2016-04-06T02:52:06.786-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.838-0500 c20011| 2016-04-06T02:52:06.786-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.846-0500 c20011| 2016-04-06T02:52:06.786-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|5, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.853-0500 c20012| 2016-04-06T02:52:06.786-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 89 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.855-0500 c20013| 2016-04-06T02:52:06.786-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 97 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.859-0500 c20013| 2016-04-06T02:52:06.786-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 93 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.861-0500 c20013| 2016-04-06T02:52:06.786-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.862-0500 c20012| 2016-04-06T02:52:06.786-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.862-0500 c20013| 2016-04-06T02:52:06.786-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:10.869-0500 c20011| 2016-04-06T02:52:06.786-0500 I COMMAND [conn10] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 55ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.871-0500 c20013| 2016-04-06T02:52:06.786-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 100 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.786-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.872-0500 c20012| 2016-04-06T02:52:06.786-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:10.878-0500 s20014| 2016-04-06T02:52:06.786-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 18 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929126000|6, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.881-0500 c20012| 2016-04-06T02:52:06.786-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 95 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.786-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.883-0500 c20011| 2016-04-06T02:52:06.786-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.889-0500 s20014| 2016-04-06T02:52:06.786-0500 D ASIO [mongosMain] startCommand: RemoteCommand 20 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.786-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.891-0500 c20012| 2016-04-06T02:52:06.787-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 95 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.892-0500 c20011| 2016-04-06T02:52:06.787-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.893-0500 c20013| 2016-04-06T02:52:06.786-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 100 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.898-0500 c20011| 2016-04-06T02:52:06.787-0500 D COMMAND [conn10] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.903-0500 c20011| 2016-04-06T02:52:06.787-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-18--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "shard" : 1, "min" : 1 }, "name" : "ns_1_shard_1_min_1", "ns" : "config.chunks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:10.908-0500 c20011| 2016-04-06T02:52:06.787-0500 D STORAGE [conn10] create uri: table:index-18--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "shard" : 1, "min" : 1 }, "name" : "ns_1_shard_1_min_1", "ns" : "config.chunks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:10.910-0500 s20014| 2016-04-06T02:52:06.787-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 20 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.914-0500 c20012| 2016-04-06T02:52:06.787-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.920-0500 c20012| 2016-04-06T02:52:06.787-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 96 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.920-0500 c20012| 2016-04-06T02:52:06.787-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 96 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:10.924-0500 c20012| 2016-04-06T02:52:06.788-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 96 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.935-0500 c20011| 2016-04-06T02:52:06.788-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.936-0500 c20011| 2016-04-06T02:52:06.788-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:10.938-0500 c20011| 2016-04-06T02:52:06.788-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|6, t: 1 } and is durable through: { ts: Timestamp 1459929126000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.946-0500 c20011| 2016-04-06T02:52:06.788-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.953-0500 c20011| 2016-04-06T02:52:06.788-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.954-0500 c20011| 2016-04-06T02:52:06.793-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-18--6404702321693896372 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:10.954-0500 c20011| 2016-04-06T02:52:06.793-0500 I INDEX [conn10] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.955-0500 c20011| 2016-04-06T02:52:06.793-0500 I INDEX [conn10] building index using bulk method
[js_test:multi_coll_drop] 2016-04-06T02:52:10.956-0500 c20011| 2016-04-06T02:52:06.793-0500 D INDEX [conn10] bulk commit starting for index: ns_1_shard_1_min_1
[js_test:multi_coll_drop] 2016-04-06T02:52:10.957-0500 c20011| 2016-04-06T02:52:06.793-0500 D INDEX [conn10] done building bottom layer, going to commit
[js_test:multi_coll_drop] 2016-04-06T02:52:10.958-0500 c20011| 2016-04-06T02:52:06.796-0500 I INDEX [conn10] build index done. scanned 0 total records. 0 secs
[js_test:multi_coll_drop] 2016-04-06T02:52:10.961-0500 c20011| 2016-04-06T02:52:06.796-0500 D STORAGE [conn10] config.chunks: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:10.965-0500 c20011| 2016-04-06T02:52:06.796-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:564 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 10ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.968-0500 c20011| 2016-04-06T02:52:06.797-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:564 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms
[js_test:multi_coll_drop] 2016-04-06T02:52:10.975-0500 c20013| 2016-04-06T02:52:06.797-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 100 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|7, t: 1, h: -473199642460803664, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebc'), ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.979-0500 c20012| 2016-04-06T02:52:06.797-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 95 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|7, t: 1, h: -473199642460803664, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebc'), ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:10.983-0500 c20012| 2016-04-06T02:52:06.797-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|7 and ending at ts: Timestamp 1459929126000|7
[js_test:multi_coll_drop] 2016-04-06T02:52:10.987-0500 c20013| 2016-04-06T02:52:06.797-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|7 and ending at ts: Timestamp 1459929126000|7
[js_test:multi_coll_drop] 2016-04-06T02:52:10.989-0500 c20013| 2016-04-06T02:52:06.797-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:10.991-0500 c20012| 2016-04-06T02:52:06.797-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:10.993-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.993-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.997-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.997-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:10.999-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.014-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.018-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.018-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.018-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.020-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.021-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.022-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.025-0500 c20013| 2016-04-06T02:52:06.797-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.025-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.027-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.027-0500 c20013| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.031-0500 c20013| 2016-04-06T02:52:06.798-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.040-0500 c20013| 2016-04-06T02:52:06.798-0500 D STORAGE [repl writer worker 1] WiredTigerKVEngine::createSortedDataInterface ident: index-20-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "shard" : 1, "min" : 1 }, "name" : "ns_1_shard_1_min_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop]
2016-04-06T02:52:11.044-0500 c20013| 2016-04-06T02:52:06.798-0500 D STORAGE [repl writer worker 1] create uri: table:index-20-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "shard" : 1, "min" : 1 }, "name" : "ns_1_shard_1_min_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.048-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.050-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.051-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.052-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.055-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.059-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.062-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.063-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.069-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.070-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.075-0500 c20012| 2016-04-06T02:52:06.797-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.086-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.088-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.091-0500 c20012| 2016-04-06T02:52:06.797-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.103-0500 c20012| 2016-04-06T02:52:06.799-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-20-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "shard" : 1, "min" : 1 }, "name" : "ns_1_shard_1_min_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.112-0500 c20012| 2016-04-06T02:52:06.799-0500 D STORAGE [repl writer worker 15] create uri: table:index-20-6577373056560964212 
config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "shard" : 1, "min" : 1 }, "name" : "ns_1_shard_1_min_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.113-0500 c20012| 2016-04-06T02:52:06.798-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.113-0500 c20012| 2016-04-06T02:52:06.798-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.119-0500 c20012| 2016-04-06T02:52:06.798-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.127-0500 c20013| 2016-04-06T02:52:06.799-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 102 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.799-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.131-0500 c20013| 2016-04-06T02:52:06.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 102 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.132-0500 c20011| 2016-04-06T02:52:06.799-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.137-0500 c20012| 2016-04-06T02:52:06.799-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 99 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.799-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.147-0500 c20012| 2016-04-06T02:52:06.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 99 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.150-0500 c20011| 2016-04-06T02:52:06.799-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.154-0500 c20011| 2016-04-06T02:52:06.800-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|6, t: 1 }, name-id: "31" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.171-0500 c20012| 2016-04-06T02:52:06.805-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-20-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:11.174-0500 c20012| 2016-04-06T02:52:06.805-0500 I INDEX [repl writer worker 15] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.176-0500 c20012| 2016-04-06T02:52:06.805-0500 I INDEX [repl writer worker 15] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:11.180-0500 c20012| 
2016-04-06T02:52:06.805-0500 D INDEX [repl writer worker 15] bulk commit starting for index: ns_1_shard_1_min_1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.183-0500 c20012| 2016-04-06T02:52:06.806-0500 D INDEX [repl writer worker 15] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:11.187-0500 c20013| 2016-04-06T02:52:06.807-0500 D STORAGE [repl writer worker 1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-20-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:11.200-0500 c20013| 2016-04-06T02:52:06.807-0500 I INDEX [repl writer worker 1] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.207-0500 c20013| 2016-04-06T02:52:06.807-0500 I INDEX [repl writer worker 1] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:11.208-0500 c20013| 2016-04-06T02:52:06.807-0500 D INDEX [repl writer worker 1] bulk commit starting for index: ns_1_shard_1_min_1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.213-0500 c20013| 2016-04-06T02:52:06.808-0500 D INDEX [repl writer worker 1] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:11.213-0500 c20012| 2016-04-06T02:52:06.810-0500 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:11.216-0500 c20012| 2016-04-06T02:52:06.810-0500 D STORAGE [repl writer worker 15] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:11.217-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.218-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.219-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.219-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.222-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.227-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.227-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.228-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.229-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.230-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.235-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR 
[repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.238-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.239-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.241-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.248-0500 c20012| 2016-04-06T02:52:06.810-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.253-0500 c20012| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.253-0500 c20013| 2016-04-06T02:52:06.811-0500 I INDEX [repl writer worker 1] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:11.255-0500 c20013| 2016-04-06T02:52:06.811-0500 D STORAGE [repl writer worker 1] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:11.271-0500 c20011| 2016-04-06T02:52:06.811-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.271-0500 c20011| 2016-04-06T02:52:06.811-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:11.278-0500 c20011| 2016-04-06T02:52:06.811-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|7, t: 1 } and is durable through: { ts: Timestamp 1459929126000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.279-0500 c20011| 2016-04-06T02:52:06.811-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|6, t: 1 }, name-id: "31" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.284-0500 c20011| 2016-04-06T02:52:06.811-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.299-0500 c20011| 2016-04-06T02:52:06.811-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: 
Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.303-0500 c20013| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.311-0500 c20013| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.311-0500 c20013| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.313-0500 c20013| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.316-0500 c20012| 2016-04-06T02:52:06.811-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:11.322-0500 c20012| 2016-04-06T02:52:06.811-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.329-0500 c20012| 2016-04-06T02:52:06.811-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 100 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.331-0500 c20012| 2016-04-06T02:52:06.811-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 100 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.332-0500 c20012| 2016-04-06T02:52:06.811-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 100 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.336-0500 c20013| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.337-0500 c20013| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.338-0500 c20013| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.339-0500 c20013| 2016-04-06T02:52:06.811-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool 
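The replSetUpdatePosition round trips above are how each secondary's Reporter pushes its applied and durable optimes to its sync source, letting the primary advance the majority commit point. That traffic is internal; the supported way to observe the same per-member progress from a shell is replSetGetStatus, as in this minimal sketch (run against any member of multidrop-configRS):

    // Print each member's last applied optime.
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function(m) {
        print(m.name + " -> " + tojson(m.optime));
    });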
[js_test:multi_coll_drop] 2016-04-06T02:52:11.340-0500 c20013| 2016-04-06T02:52:06.812-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.343-0500 c20013| 2016-04-06T02:52:06.812-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.344-0500 c20013| 2016-04-06T02:52:06.812-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.347-0500 c20013| 2016-04-06T02:52:06.812-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.349-0500 c20013| 2016-04-06T02:52:06.812-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.349-0500 c20013| 2016-04-06T02:52:06.812-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.350-0500 c20013| 2016-04-06T02:52:06.812-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.351-0500 c20013| 2016-04-06T02:52:06.812-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.354-0500 c20013| 2016-04-06T02:52:06.812-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:11.358-0500 c20013| 2016-04-06T02:52:06.812-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.363-0500 c20013| 2016-04-06T02:52:06.812-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 103 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.364-0500 c20013| 2016-04-06T02:52:06.812-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 103 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.381-0500 c20011| 2016-04-06T02:52:06.812-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.383-0500 c20011| 2016-04-06T02:52:06.812-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:11.392-0500 c20011| 2016-04-06T02:52:06.812-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.393-0500 c20011| 2016-04-06T02:52:06.812-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|7, t: 1 } and is durable through: { ts: Timestamp 1459929126000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.416-0500 c20011| 2016-04-06T02:52:06.812-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|6, t: 1 }, name-id: "31" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.445-0500 c20011| 2016-04-06T02:52:06.812-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.458-0500 c20013| 2016-04-06T02:52:06.812-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 103 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.468-0500 c20012| 2016-04-06T02:52:06.814-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.473-0500 c20012| 2016-04-06T02:52:06.814-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 102 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, 
cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.476-0500 c20012| 2016-04-06T02:52:06.814-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 102 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.509-0500 c20011| 2016-04-06T02:52:06.814-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.509-0500 c20011| 2016-04-06T02:52:06.814-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:11.518-0500 c20011| 2016-04-06T02:52:06.814-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|7, t: 1 } and is durable through: { ts: Timestamp 1459929126000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.520-0500 c20011| 2016-04-06T02:52:06.814-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.527-0500 c20011| 2016-04-06T02:52:06.815-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.533-0500 c20011| 2016-04-06T02:52:06.815-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.535-0500 c20012| 2016-04-06T02:52:06.815-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 102 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.539-0500 c20011| 2016-04-06T02:52:06.815-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 15ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.544-0500 c20011| 2016-04-06T02:52:06.815-0500 I COMMAND [conn10] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 
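2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 28ms

Most of those 28ms is the wait for "Updating _lastCommittedOpTime" just above: with writeConcern w: "majority" the insert is acknowledged only once a majority of the config replica set has made the op durable. A shell sketch of the same write follows (this 3.3-era branch still creates indexes by inserting into config.system.indexes; on later servers one would call createIndex instead):

    // Majority-acknowledged index insert against the config primary,
    // mirroring the command shown in the log.
    db.getSiblingDB("config").runCommand({
        insert: "system.indexes",
        documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 },
                       name: "ns_1_shard_1_min_1", unique: true } ],
        writeConcern: { w: "majority", wtimeout: 0 },
        maxTimeMS: 30000
    });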
[js_test:multi_coll_drop] 2016-04-06T02:52:11.545-0500 c20012| 2016-04-06T02:52:06.815-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 99 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.550-0500 s20014| 2016-04-06T02:52:06.815-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 20 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929126000|7, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:11.554-0500 c20011| 2016-04-06T02:52:06.815-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 15ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.555-0500 c20012| 2016-04-06T02:52:06.815-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.556-0500 c20012| 2016-04-06T02:52:06.815-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:11.560-0500 s20014| 2016-04-06T02:52:06.815-0500 D ASIO [mongosMain] startCommand: RemoteCommand 22 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.815-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.563-0500 c20012| 2016-04-06T02:52:06.815-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 105 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.815-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.564-0500 s20014| 2016-04-06T02:52:06.815-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 22 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.568-0500 c20011| 2016-04-06T02:52:06.815-0500 D COMMAND [conn10] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.572-0500 c20012| 2016-04-06T02:52:06.815-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 105 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.576-0500 c20011| 2016-04-06T02:52:06.815-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.579-0500 c20011| 2016-04-06T02:52:06.815-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-19--6404702321693896372
config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "lastmod" : 1 }, "name" : "ns_1_lastmod_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.584-0500 c20011| 2016-04-06T02:52:06.815-0500 D STORAGE [conn10] create uri: table:index-19--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "lastmod" : 1 }, "name" : "ns_1_lastmod_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.585-0500 c20013| 2016-04-06T02:52:06.815-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 102 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.589-0500 c20013| 2016-04-06T02:52:06.815-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.592-0500 c20013| 2016-04-06T02:52:06.815-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:11.594-0500 c20013| 2016-04-06T02:52:06.815-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 106 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.815-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.598-0500 c20013| 2016-04-06T02:52:06.816-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.601-0500 c20013| 2016-04-06T02:52:06.816-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 107 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.603-0500 c20013| 2016-04-06T02:52:06.816-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 107 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.604-0500 c20013| 2016-04-06T02:52:06.816-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 106 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.606-0500 c20011| 
2016-04-06T02:52:06.816-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.608-0500 c20011| 2016-04-06T02:52:06.816-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:11.611-0500 c20011| 2016-04-06T02:52:06.816-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.613-0500 c20011| 2016-04-06T02:52:06.816-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.618-0500 c20011| 2016-04-06T02:52:06.816-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|7, t: 1 } and is durable through: { ts: Timestamp 1459929126000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.623-0500 c20011| 2016-04-06T02:52:06.816-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.624-0500 c20013| 2016-04-06T02:52:06.816-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 107 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.627-0500 c20011| 2016-04-06T02:52:06.819-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-19--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:11.628-0500 c20011| 2016-04-06T02:52:06.819-0500 I INDEX [conn10] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.629-0500 c20011| 2016-04-06T02:52:06.819-0500 I INDEX [conn10] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:11.629-0500 c20011| 2016-04-06T02:52:06.819-0500 D INDEX [conn10] bulk commit starting for index: ns_1_lastmod_1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.630-0500 c20011| 2016-04-06T02:52:06.819-0500 D INDEX [conn10] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:11.632-0500 c20011| 2016-04-06T02:52:06.821-0500 I INDEX [conn10] build index done. scanned 0 total records. 
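0 secs

The primary's index build becomes a single oplog entry (op: "i" into config.system.indexes, visible in the nextBatch responses below) that both secondaries fetch and replay. One can pull that entry out of the oplog directly; a small sketch, with the timestamp taken from the log:

    // Fetch the oplog entry that carries the ns_1_lastmod_1 index build.
    db.getSiblingDB("local").oplog.rs.findOne({ ts: Timestamp(1459929126, 8) });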
[js_test:multi_coll_drop] 2016-04-06T02:52:11.635-0500 c20011| 2016-04-06T02:52:06.821-0500 D STORAGE [conn10] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:11.640-0500 c20011| 2016-04-06T02:52:06.821-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:553 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.644-0500 c20011| 2016-04-06T02:52:06.821-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:553 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.648-0500 c20012| 2016-04-06T02:52:06.821-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 105 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|8, t: 1, h: -3358101729819284316, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebd'), ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.652-0500 c20013| 2016-04-06T02:52:06.821-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 106 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|8, t: 1, h: -3358101729819284316, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebd'), ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.658-0500 c20012| 2016-04-06T02:52:06.822-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|8 and ending at ts: Timestamp 1459929126000|8 [js_test:multi_coll_drop] 2016-04-06T02:52:11.659-0500 c20012| 2016-04-06T02:52:06.822-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:11.660-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.661-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.664-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.665-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.666-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.667-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.668-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.670-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.671-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.671-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.672-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.672-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.673-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.674-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.674-0500 c20012| 2016-04-06T02:52:06.822-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.675-0500 c20012| 2016-04-06T02:52:06.822-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.675-0500 c20012| 2016-04-06T02:52:06.823-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.680-0500 c20012| 2016-04-06T02:52:06.823-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-21-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "lastmod" : 1 }, "name" : "ns_1_lastmod_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.685-0500 c20012| 
2016-04-06T02:52:06.823-0500 D STORAGE [repl writer worker 15] create uri: table:index-21-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "lastmod" : 1 }, "name" : "ns_1_lastmod_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.686-0500 c20013| 2016-04-06T02:52:06.823-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|8 and ending at ts: Timestamp 1459929126000|8 [js_test:multi_coll_drop] 2016-04-06T02:52:11.687-0500 c20013| 2016-04-06T02:52:06.824-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:11.689-0500 c20011| 2016-04-06T02:52:06.824-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|7, t: 1 }, name-id: "33" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.689-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.690-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.690-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.692-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.692-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.694-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.701-0500 c20012| 2016-04-06T02:52:06.824-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 107 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.824-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.706-0500 c20012| 2016-04-06T02:52:06.824-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 107 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.708-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.709-0500 c20011| 2016-04-06T02:52:06.824-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.710-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.713-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 8] starting 
thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.714-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.715-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.716-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.721-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.723-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.727-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.728-0500 c20013| 2016-04-06T02:52:06.824-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.728-0500 c20013| 2016-04-06T02:52:06.824-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.734-0500 c20013| 2016-04-06T02:52:06.825-0500 D STORAGE [repl writer worker 2] WiredTigerKVEngine::createSortedDataInterface ident: index-21-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "lastmod" : 1 }, "name" : "ns_1_lastmod_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.737-0500 c20013| 2016-04-06T02:52:06.825-0500 D STORAGE [repl writer worker 2] create uri: table:index-21-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "lastmod" : 1 }, "name" : "ns_1_lastmod_1", "ns" : "config.chunks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.741-0500 c20013| 2016-04-06T02:52:06.825-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 110 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.825-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.745-0500 c20013| 2016-04-06T02:52:06.825-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 110 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.747-0500 c20011| 2016-04-06T02:52:06.825-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.757-0500 c20011| 2016-04-06T02:52:06.846-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.758-0500 c20011| 2016-04-06T02:52:06.846-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:11.762-0500 c20011| 2016-04-06T02:52:06.846-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.769-0500 c20011| 2016-04-06T02:52:06.846-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|8, t: 1 } and is durable through: { ts: Timestamp 1459929126000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.773-0500 c20011| 2016-04-06T02:52:06.846-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|7, t: 1 }, name-id: "33" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.780-0500 c20011| 2016-04-06T02:52:06.847-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.780-0500 c20013| 2016-04-06T02:52:06.839-0500 D STORAGE [repl writer worker 2] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-21-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:11.798-0500 c20013| 2016-04-06T02:52:06.839-0500 I INDEX [repl writer worker 2] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.799-0500 c20013| 2016-04-06T02:52:06.839-0500 I INDEX [repl writer worker 2] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:11.800-0500 c20013| 2016-04-06T02:52:06.839-0500 D INDEX [repl writer worker 2] bulk commit starting for index: ns_1_lastmod_1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.801-0500 c20013| 2016-04-06T02:52:06.839-0500 D INDEX [repl writer worker 2] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:11.802-0500 c20013| 2016-04-06T02:52:06.841-0500 I INDEX [repl writer worker 2] build index done. scanned 0 total records. 
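0 secs

With the 1459929126000|8 op applied on c20013 and being applied on c20012, every node is converging on both new config.chunks indexes; the recurring "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines are just the majority-read machinery waiting for the commit point to cover each op before exposing it. A quick shell check that the builds landed (run on any member once replication settles):

    // List index names on config.chunks; both new indexes should appear.
    db.getSiblingDB("config").chunks.getIndexes().forEach(function(ix) {
        print(ix.name);  // _id_, ns_1_shard_1_min_1, ns_1_lastmod_1
    });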
0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:11.802-0500 c20013| 2016-04-06T02:52:06.841-0500 D STORAGE [repl writer worker 2] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:11.804-0500 c20013| 2016-04-06T02:52:06.845-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.805-0500 c20013| 2016-04-06T02:52:06.845-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.808-0500 c20013| 2016-04-06T02:52:06.845-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.810-0500 c20013| 2016-04-06T02:52:06.845-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.811-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.812-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.813-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.814-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.816-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.817-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.819-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.819-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.820-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.820-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.821-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.822-0500 c20013| 2016-04-06T02:52:06.846-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.824-0500 c20013| 2016-04-06T02:52:06.846-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:11.831-0500 c20013| 2016-04-06T02:52:06.846-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.838-0500 c20013| 2016-04-06T02:52:06.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 111 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.840-0500 c20013| 2016-04-06T02:52:06.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 111 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.844-0500 c20013| 2016-04-06T02:52:06.847-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 111 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.850-0500 c20011| 2016-04-06T02:52:06.850-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.851-0500 c20011| 2016-04-06T02:52:06.850-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:11.853-0500 c20011| 2016-04-06T02:52:06.850-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.857-0500 c20011| 2016-04-06T02:52:06.850-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|8, t: 1 } and is durable through: { ts: Timestamp 1459929126000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.858-0500 c20011| 2016-04-06T02:52:06.850-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.861-0500 c20011| 2016-04-06T02:52:06.850-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.865-0500 c20013| 2016-04-06T02:52:06.850-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.869-0500 c20013| 2016-04-06T02:52:06.850-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 113 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:11.872-0500 c20013| 2016-04-06T02:52:06.850-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 113 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.876-0500 c20011| 2016-04-06T02:52:06.850-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 26ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.880-0500 c20013| 2016-04-06T02:52:06.850-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 113 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.885-0500 c20011| 2016-04-06T02:52:06.850-0500 I COMMAND [conn10] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 35ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.887-0500 c20012| 2016-04-06T02:52:06.850-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 107 finished with response: { cursor: { nextBatch: [], id: 
20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.889-0500 c20012| 2016-04-06T02:52:06.850-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.891-0500 s20014| 2016-04-06T02:52:06.850-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 22 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929126000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:11.892-0500 c20012| 2016-04-06T02:52:06.850-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:11.894-0500 c20012| 2016-04-06T02:52:06.851-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 109 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.851-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.898-0500 s20014| 2016-04-06T02:52:06.851-0500 D ASIO [mongosMain] startCommand: RemoteCommand 24 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.851-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.899-0500 s20014| 2016-04-06T02:52:06.851-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 24 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.900-0500 c20011| 2016-04-06T02:52:06.851-0500 D COMMAND [conn10] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.914-0500 c20011| 2016-04-06T02:52:06.851-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.916-0500 c20012| 2016-04-06T02:52:06.851-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 109 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.917-0500 c20011| 2016-04-06T02:52:06.851-0500 D STORAGE [conn10] stored meta data for config.shards @ RecordId(10) [js_test:multi_coll_drop] 2016-04-06T02:52:11.920-0500 c20011| 2016-04-06T02:52:06.851-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-20--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:11.928-0500 c20011| 2016-04-06T02:52:06.850-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|7, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 24ms [js_test:multi_coll_drop] 2016-04-06T02:52:11.929-0500 c20012| 2016-04-06T02:52:06.852-0500 D 
STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-21-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:11.935-0500 c20012| 2016-04-06T02:52:06.852-0500 I INDEX [repl writer worker 15] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:11.935-0500 c20012| 2016-04-06T02:52:06.852-0500 I INDEX [repl writer worker 15] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:11.936-0500 c20012| 2016-04-06T02:52:06.852-0500 D INDEX [repl writer worker 15] bulk commit starting for index: ns_1_lastmod_1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.938-0500 c20011| 2016-04-06T02:52:06.852-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.938-0500 c20012| 2016-04-06T02:52:06.853-0500 D INDEX [repl writer worker 15] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:11.939-0500 c20013| 2016-04-06T02:52:06.851-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 110 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.939-0500 c20013| 2016-04-06T02:52:06.852-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:11.941-0500 c20013| 2016-04-06T02:52:06.852-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:11.946-0500 c20013| 2016-04-06T02:52:06.852-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 116 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.852-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:11.948-0500 c20013| 2016-04-06T02:52:06.852-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 116 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:11.950-0500 c20012| 2016-04-06T02:52:06.855-0500 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:11.951-0500 c20012| 2016-04-06T02:52:06.855-0500 D STORAGE [repl writer worker 15] config.chunks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:11.951-0500 c20012| 2016-04-06T02:52:06.855-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.953-0500 c20012| 2016-04-06T02:52:06.855-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.955-0500 c20012| 2016-04-06T02:52:06.855-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.957-0500 c20012| 2016-04-06T02:52:06.855-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.958-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.959-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.960-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.961-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.962-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.962-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.963-0500 c20011| 2016-04-06T02:52:06.856-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-20--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:11.963-0500 c20011| 2016-04-06T02:52:06.856-0500 D STORAGE [conn10] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:11.966-0500 c20011| 2016-04-06T02:52:06.856-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-21--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.968-0500 c20011| 2016-04-06T02:52:06.856-0500 D STORAGE [conn10] create uri: table:index-21--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.969-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.970-0500 c20012| 2016-04-06T02:52:06.856-0500 D 
EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.971-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.972-0500 c20012| 2016-04-06T02:52:06.856-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.972-0500 c20012| 2016-04-06T02:52:06.858-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.973-0500 c20012| 2016-04-06T02:52:06.858-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:11.977-0500 c20012| 2016-04-06T02:52:06.859-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:11.983-0500 c20011| 2016-04-06T02:52:06.859-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-21--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:11.984-0500 c20011| 2016-04-06T02:52:06.859-0500 D STORAGE [conn10] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:11.988-0500 c20011| 2016-04-06T02:52:06.859-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-22--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "host" : 1 }, "name" : "host_1", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.992-0500 c20011| 2016-04-06T02:52:06.859-0500 D STORAGE [conn10] create uri: table:index-22--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "host" : 1 }, "name" : "host_1", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:11.994-0500 c20011| 2016-04-06T02:52:06.859-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:458 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.014-0500 c20011| 2016-04-06T02:52:06.859-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:458 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.015-0500 c20012| 2016-04-06T02:52:06.859-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 109 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 
1459929126000|9, t: 1, h: 4991443241523653531, v: 2, op: "c", ns: "config.$cmd", o: { create: "shards" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.016-0500 c20013| 2016-04-06T02:52:06.859-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 116 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|9, t: 1, h: 4991443241523653531, v: 2, op: "c", ns: "config.$cmd", o: { create: "shards" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.016-0500 c20012| 2016-04-06T02:52:06.859-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|9 and ending at ts: Timestamp 1459929126000|9 [js_test:multi_coll_drop] 2016-04-06T02:52:12.017-0500 c20013| 2016-04-06T02:52:06.859-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|9 and ending at ts: Timestamp 1459929126000|9 [js_test:multi_coll_drop] 2016-04-06T02:52:12.017-0500 c20012| 2016-04-06T02:52:06.861-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:12.020-0500 c20012| 2016-04-06T02:52:06.861-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.023-0500 c20012| 2016-04-06T02:52:06.861-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 111 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.026-0500 c20012| 2016-04-06T02:52:06.861-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 111 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.028-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.030-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.032-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.033-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:12.034-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.034-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.041-0500 c20011| 2016-04-06T02:52:06.861-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.042-0500 c20011| 2016-04-06T02:52:06.861-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:12.044-0500 c20011| 2016-04-06T02:52:06.861-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|8, t: 1 } and is durable through: { ts: Timestamp 1459929126000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.047-0500 c20011| 2016-04-06T02:52:06.861-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.048-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.049-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.054-0500 c20011| 2016-04-06T02:52:06.861-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.056-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.058-0500 c20012| 2016-04-06T02:52:06.861-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 111 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.058-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.059-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.063-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 11] 
starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.064-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.064-0500 c20012| 2016-04-06T02:52:06.861-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.066-0500 c20013| 2016-04-06T02:52:06.861-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:12.067-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.069-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.069-0500 c20012| 2016-04-06T02:52:06.861-0500 D STORAGE [repl writer worker 13] create collection config.shards {} [js_test:multi_coll_drop] 2016-04-06T02:52:12.071-0500 c20012| 2016-04-06T02:52:06.861-0500 D STORAGE [repl writer worker 13] stored meta data for config.shards @ RecordId(11) [js_test:multi_coll_drop] 2016-04-06T02:52:12.086-0500 c20012| 2016-04-06T02:52:06.861-0500 D STORAGE [repl writer worker 13] WiredTigerKVEngine::createRecordStore uri: table:collection-22-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:12.090-0500 c20012| 2016-04-06T02:52:06.861-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.092-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.093-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.095-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.100-0500 c20013| 2016-04-06T02:52:06.862-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 118 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.862-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.102-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.105-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.106-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.108-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.110-0500 c20012| 2016-04-06T02:52:06.862-0500 D ASIO [rsBackgroundSync-0] startCommand: 
RemoteCommand 113 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.862-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.111-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.112-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.113-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.114-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.116-0500 c20013| 2016-04-06T02:52:06.862-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 118 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.118-0500 c20012| 2016-04-06T02:52:06.862-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 113 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.123-0500 c20011| 2016-04-06T02:52:06.862-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.129-0500 c20011| 2016-04-06T02:52:06.862-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.132-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.137-0500 c20013| 2016-04-06T02:52:06.862-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.138-0500 c20013| 2016-04-06T02:52:06.863-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.139-0500 c20013| 2016-04-06T02:52:06.863-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.141-0500 c20013| 2016-04-06T02:52:06.863-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.143-0500 c20013| 2016-04-06T02:52:06.863-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.144-0500 c20013| 2016-04-06T02:52:06.863-0500 D STORAGE [repl writer worker 14] create collection config.shards {} [js_test:multi_coll_drop] 2016-04-06T02:52:12.148-0500 c20013| 2016-04-06T02:52:06.863-0500 D STORAGE [repl writer worker 14] stored meta data for config.shards @ RecordId(11) [js_test:multi_coll_drop] 2016-04-06T02:52:12.153-0500 c20013| 2016-04-06T02:52:06.863-0500 D STORAGE [repl writer worker 14] WiredTigerKVEngine::createRecordStore uri: table:collection-22-751336887848580549 config: 
type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:12.156-0500 c20011| 2016-04-06T02:52:06.864-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-22--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:12.157-0500 c20011| 2016-04-06T02:52:06.864-0500 I INDEX [conn10] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.158-0500 c20011| 2016-04-06T02:52:06.864-0500 I INDEX [conn10] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:12.159-0500 c20011| 2016-04-06T02:52:06.864-0500 D INDEX [conn10] bulk commit starting for index: host_1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.161-0500 c20011| 2016-04-06T02:52:06.865-0500 D INDEX [conn10] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:12.166-0500 c20012| 2016-04-06T02:52:06.865-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.171-0500 c20012| 2016-04-06T02:52:06.865-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 114 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.174-0500 c20012| 2016-04-06T02:52:06.865-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 114 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.176-0500 c20011| 2016-04-06T02:52:06.865-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.178-0500 c20011| 2016-04-06T02:52:06.865-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:12.180-0500 c20011| 2016-04-06T02:52:06.865-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 
has reached optime: { ts: Timestamp 1459929126000|8, t: 1 } and is durable through: { ts: Timestamp 1459929126000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.186-0500 c20011| 2016-04-06T02:52:06.865-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.193-0500 c20011| 2016-04-06T02:52:06.865-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.194-0500 c20012| 2016-04-06T02:52:06.866-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 114 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.195-0500 c20011| 2016-04-06T02:52:06.867-0500 I INDEX [conn10] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:12.197-0500 c20011| 2016-04-06T02:52:06.867-0500 D STORAGE [conn10] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:12.206-0500 c20011| 2016-04-06T02:52:06.867-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:534 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.209-0500 c20011| 2016-04-06T02:52:06.867-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:534 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.214-0500 c20012| 2016-04-06T02:52:06.867-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 113 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|10, t: 1, h: 5985663985225581924, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebe'), ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.218-0500 c20013| 2016-04-06T02:52:06.867-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 118 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|10, t: 1, h: 5985663985225581924, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebe'), ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } } ], id: 17466612721, ns: 
"local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.219-0500 c20012| 2016-04-06T02:52:06.867-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|10 and ending at ts: Timestamp 1459929126000|10 [js_test:multi_coll_drop] 2016-04-06T02:52:12.223-0500 c20012| 2016-04-06T02:52:06.867-0500 D STORAGE [repl writer worker 13] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-22-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.223-0500 c20012| 2016-04-06T02:52:06.867-0500 D STORAGE [repl writer worker 13] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:12.227-0500 c20012| 2016-04-06T02:52:06.868-0500 D STORAGE [repl writer worker 13] WiredTigerKVEngine::createSortedDataInterface ident: index-23-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:12.229-0500 c20012| 2016-04-06T02:52:06.868-0500 D STORAGE [repl writer worker 13] create uri: table:index-23-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:12.230-0500 c20013| 2016-04-06T02:52:06.867-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|10 and ending at ts: Timestamp 1459929126000|10 [js_test:multi_coll_drop] 2016-04-06T02:52:12.233-0500 c20013| 2016-04-06T02:52:06.867-0500 D STORAGE [repl writer worker 14] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-22-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.234-0500 c20013| 2016-04-06T02:52:06.867-0500 D STORAGE [repl writer worker 14] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:12.237-0500 c20013| 2016-04-06T02:52:06.868-0500 D STORAGE [repl writer worker 14] WiredTigerKVEngine::createSortedDataInterface ident: index-23-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:12.238-0500 c20013| 2016-04-06T02:52:06.868-0500 D STORAGE [repl writer worker 14] create uri: table:index-23-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:12.241-0500 c20011| 2016-04-06T02:52:06.868-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|8, t: 1 }, name-id: "35" } [js_test:multi_coll_drop] 
2016-04-06T02:52:12.245-0500 c20012| 2016-04-06T02:52:06.869-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 117 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.869-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.250-0500 c20012| 2016-04-06T02:52:06.869-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 117 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.253-0500 c20011| 2016-04-06T02:52:06.869-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.257-0500 c20013| 2016-04-06T02:52:06.870-0500 D STORAGE [repl writer worker 14] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-23-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:12.258-0500 c20013| 2016-04-06T02:52:06.870-0500 D STORAGE [repl writer worker 14] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:12.259-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.262-0500 c20012| 2016-04-06T02:52:06.870-0500 D STORAGE [repl writer worker 13] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-23-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:12.263-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.263-0500 c20012| 2016-04-06T02:52:06.870-0500 D STORAGE [repl writer worker 13] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:12.264-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.265-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.266-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.269-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.271-0500 c20012| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.275-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.276-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.276-0500 c20013| 2016-04-06T02:52:06.870-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.278-0500 c20012| 
2016-04-06T02:52:06.871-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.280-0500 c20012| 2016-04-06T02:52:06.871-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.283-0500 c20012| 2016-04-06T02:52:06.871-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.283-0500 c20012| 2016-04-06T02:52:06.871-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.288-0500 c20012| 2016-04-06T02:52:06.871-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.291-0500 c20011| 2016-04-06T02:52:06.871-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.291-0500 c20012| 2016-04-06T02:52:06.871-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.292-0500 c20012| 2016-04-06T02:52:06.871-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.293-0500 c20012| 2016-04-06T02:52:06.871-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.296-0500 c20013| 2016-04-06T02:52:06.871-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 120 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.871-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.300-0500 c20013| 2016-04-06T02:52:06.871-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 120 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.300-0500 c20012| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.302-0500 c20012| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.303-0500 c20013| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.305-0500 c20013| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.306-0500 c20013| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.307-0500 c20013| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.310-0500 c20012| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.310-0500 c20012| 2016-04-06T02:52:06.872-0500 D 
EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.311-0500 c20012| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.311-0500 c20013| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.312-0500 c20013| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.315-0500 c20013| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.315-0500 c20012| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.317-0500 c20012| 2016-04-06T02:52:06.872-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.323-0500 c20012| 2016-04-06T02:52:06.872-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:12.328-0500 c20012| 2016-04-06T02:52:06.873-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:12.330-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.330-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.331-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.333-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.333-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.337-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.338-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.341-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.341-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.342-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.346-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool 
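The getMore / nextBatch exchanges running through this stretch of the log are the secondaries' oplog fetchers tailing the primary's local.oplog.rs, with each fetched batch handed to the repl writer worker pool whose threads are seen starting and shutting down around every batch. As a rough sketch only (the host and port are assumptions copied from the log, and this is the shell's tailable-cursor idiom, not the server's fetcher code), the same tailing pattern looks like:

    // Sketch: tail a member's oplog roughly the way the fetcher's
    // find/getMore loop does. "mongovm16:20011" is assumed from the log.
    var conn = new Mongo("mongovm16:20011");
    var oplog = conn.getDB("local").oplog.rs;
    // start at the newest entry, then wait for anything after it
    var last = oplog.find().sort({ $natural: -1 }).limit(1).next().ts;
    var cur = oplog.find({ ts: { $gt: last } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) {
        printjson(cur.next()); // each doc corresponds to a nextBatch entry above
    }

The real fetcher's getMore additionally carries term and lastKnownCommittedOpTime, as in the records above, which is how an otherwise-empty batch (nextBatch: []) can still move a node's commit point forward.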
[js_test:multi_coll_drop] 2016-04-06T02:52:12.349-0500 c20012| 2016-04-06T02:52:06.873-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.354-0500 c20012| 2016-04-06T02:52:06.873-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 118 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.355-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.355-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.356-0500 c20012| 2016-04-06T02:52:06.873-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.356-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.359-0500 c20012| 2016-04-06T02:52:06.873-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 118 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.360-0500 c20012| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.366-0500 c20012| 2016-04-06T02:52:06.873-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-24-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "host" : 1 }, "name" : "host_1", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:12.373-0500 c20012| 2016-04-06T02:52:06.873-0500 D STORAGE [repl writer worker 15] create uri: table:index-24-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "host" : 1 }, "name" : "host_1", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:12.374-0500 c20012| 2016-04-06T02:52:06.873-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 118 finished with response: { ok: 1.0 } 
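This is the upstream half of the same loop: after each applied batch, a secondary's SyncSourceFeedback reporter sends replSetUpdatePosition to its sync source with one entry per member, each carrying the last appliedOpTime and the last durableOpTime (journaled) it knows for that member; the primary folds these into its commit-point calculation and replies { ok: 1.0 }, as Request 118 does above. The command itself is internal, but the same per-member positions are visible from any shell via replSetGetStatus; a minimal sketch (field layout as in this 3.3 build, which may differ in other versions):

    // Sketch: print each member's replication position, the data the
    // replSetUpdatePosition payloads above carry per memberId.
    rs.status().members.forEach(function (m) {
        // under protocolVersion 1, m.optime is { ts: Timestamp, t: term }
        print(m.name + "  " + m.stateStr + "  applied=" + tojson(m.optime));
    });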
[js_test:multi_coll_drop] 2016-04-06T02:52:12.377-0500 c20013| 2016-04-06T02:52:06.873-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:12.379-0500 c20013| 2016-04-06T02:52:06.873-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:12.380-0500 c20013| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.381-0500 c20013| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.385-0500 c20013| 2016-04-06T02:52:06.873-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.389-0500 c20013| 2016-04-06T02:52:06.874-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.393-0500 c20013| 2016-04-06T02:52:06.874-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 121 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.394-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.395-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.396-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.398-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.399-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.400-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.401-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.402-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.402-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.403-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.404-0500 c20013| 2016-04-06T02:52:06.874-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 121 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.413-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.417-0500 c20011| 2016-04-06T02:52:06.873-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.418-0500 c20011| 2016-04-06T02:52:06.873-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:12.422-0500 c20011| 2016-04-06T02:52:06.873-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|9, t: 1 } and is durable through: { ts: Timestamp 1459929126000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.426-0500 c20011| 2016-04-06T02:52:06.873-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|8, t: 1 }, name-id: "35" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.428-0500 c20011| 2016-04-06T02:52:06.873-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.432-0500 c20011| 2016-04-06T02:52:06.873-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.435-0500 c20011| 2016-04-06T02:52:06.874-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, 
cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.435-0500 c20011| 2016-04-06T02:52:06.874-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:12.437-0500 c20013| 2016-04-06T02:52:06.874-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.437-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.441-0500 c20011| 2016-04-06T02:52:06.874-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.445-0500 c20011| 2016-04-06T02:52:06.874-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|9, t: 1 } and is durable through: { ts: Timestamp 1459929126000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.449-0500 c20011| 2016-04-06T02:52:06.874-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|8, t: 1 }, name-id: "35" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.451-0500 c20012| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.459-0500 c20011| 2016-04-06T02:52:06.874-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.469-0500 c20013| 2016-04-06T02:52:06.874-0500 D STORAGE [repl writer worker 11] WiredTigerKVEngine::createSortedDataInterface ident: index-24-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "host" : 1 }, "name" : "host_1", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:12.469-0500 c20013| 2016-04-06T02:52:06.874-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 121 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.477-0500 c20013| 2016-04-06T02:52:06.874-0500 D STORAGE [repl writer worker 11] create uri: table:index-24-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, 
"unique" : true, "key" : { "host" : 1 }, "name" : "host_1", "ns" : "config.shards" }), [js_test:multi_coll_drop] 2016-04-06T02:52:12.478-0500 c20013| 2016-04-06T02:52:06.874-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.484-0500 c20013| 2016-04-06T02:52:06.875-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.735-0500 c20013| 2016-04-06T02:52:06.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 123 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.740-0500 c20013| 2016-04-06T02:52:06.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 123 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.748-0500 c20011| 2016-04-06T02:52:06.875-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.750-0500 c20011| 2016-04-06T02:52:06.875-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:12.758-0500 c20011| 2016-04-06T02:52:06.875-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.765-0500 c20011| 2016-04-06T02:52:06.875-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|9, t: 1 } and is durable through: { ts: Timestamp 1459929126000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.766-0500 c20011| 2016-04-06T02:52:06.875-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.768-0500 c20012| 2016-04-06T02:52:06.875-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream 
updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.780-0500 c20012| 2016-04-06T02:52:06.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 120 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.784-0500 c20011| 2016-04-06T02:52:06.875-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|9, t: 1 }, name-id: "38" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.785-0500 c20011| 2016-04-06T02:52:06.875-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|9, t: 1 }, name-id: "38" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.786-0500 c20012| 2016-04-06T02:52:06.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 120 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.799-0500 c20011| 2016-04-06T02:52:06.875-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.803-0500 c20011| 2016-04-06T02:52:06.875-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.804-0500 c20011| 2016-04-06T02:52:06.875-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:12.811-0500 c20011| 2016-04-06T02:52:06.875-0500 D REPL [conn12] received notification that node with memberID 1 in config 
with version 1 has reached optime: { ts: Timestamp 1459929126000|9, t: 1 } and is durable through: { ts: Timestamp 1459929126000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.819-0500 c20011| 2016-04-06T02:52:06.875-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|9, t: 1 }, name-id: "38" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.823-0500 c20011| 2016-04-06T02:52:06.875-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.830-0500 c20011| 2016-04-06T02:52:06.875-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.833-0500 c20012| 2016-04-06T02:52:06.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 120 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.837-0500 c20013| 2016-04-06T02:52:06.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 123 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.842-0500 c20011| 2016-04-06T02:52:06.875-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.847-0500 c20011| 2016-04-06T02:52:06.875-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.850-0500 c20012| 2016-04-06T02:52:06.875-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 117 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.850-0500 c20013| 2016-04-06T02:52:06.875-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 120 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.851-0500 c20012| 2016-04-06T02:52:06.876-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.852-0500 c20012| 
2016-04-06T02:52:06.876-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:12.857-0500 c20012| 2016-04-06T02:52:06.877-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 123 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.877-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.857-0500 c20012| 2016-04-06T02:52:06.877-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 123 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.859-0500 c20011| 2016-04-06T02:52:06.877-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.866-0500 c20011| 2016-04-06T02:52:06.877-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.868-0500 c20013| 2016-04-06T02:52:06.877-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.869-0500 c20013| 2016-04-06T02:52:06.877-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:12.873-0500 c20013| 2016-04-06T02:52:06.877-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 126 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.877-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:12.876-0500 c20013| 2016-04-06T02:52:06.877-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 126 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.879-0500 c20012| 2016-04-06T02:52:06.883-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-24-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:12.881-0500 c20012| 2016-04-06T02:52:06.883-0500 I INDEX [repl writer worker 15] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.885-0500 c20012| 2016-04-06T02:52:06.883-0500 I INDEX [repl writer worker 15] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:12.886-0500 c20012| 2016-04-06T02:52:06.883-0500 D INDEX [repl writer worker 15] bulk commit starting for index: host_1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.887-0500 c20013| 2016-04-06T02:52:06.883-0500 D STORAGE [repl writer worker 11] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-24-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:12.890-0500 c20013| 2016-04-06T02:52:06.883-0500 I INDEX [repl writer worker 11] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.892-0500 c20013| 2016-04-06T02:52:06.883-0500 I INDEX [repl writer worker 11] 
building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:12.892-0500 c20013| 2016-04-06T02:52:06.883-0500 D INDEX [repl writer worker 11] bulk commit starting for index: host_1 [js_test:multi_coll_drop] 2016-04-06T02:52:12.892-0500 c20012| 2016-04-06T02:52:06.883-0500 D INDEX [repl writer worker 15] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:12.895-0500 c20013| 2016-04-06T02:52:06.883-0500 D INDEX [repl writer worker 11] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:12.897-0500 c20012| 2016-04-06T02:52:06.884-0500 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:12.899-0500 c20013| 2016-04-06T02:52:06.884-0500 I INDEX [repl writer worker 11] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:12.900-0500 c20012| 2016-04-06T02:52:06.884-0500 D STORAGE [repl writer worker 15] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:12.902-0500 c20013| 2016-04-06T02:52:06.884-0500 D STORAGE [repl writer worker 11] config.shards: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:12.907-0500 c20012| 2016-04-06T02:52:06.884-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.908-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.908-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.912-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.913-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.914-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.914-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.920-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.922-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.922-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.924-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.924-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.927-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 13] shutting down 
thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.929-0500 c20013| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.930-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.932-0500 c20012| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.937-0500 c20013| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.938-0500 c20013| 2016-04-06T02:52:06.885-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.940-0500 c20012| 2016-04-06T02:52:06.886-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:12.944-0500 c20012| 2016-04-06T02:52:06.886-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:12.949-0500 c20012| 2016-04-06T02:52:06.888-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.955-0500 c20012| 2016-04-06T02:52:06.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 124 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:12.961-0500 c20012| 2016-04-06T02:52:06.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 124 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:12.969-0500 c20011| 2016-04-06T02:52:06.888-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
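Note the asymmetry in the payload above: for memberId 1 the appliedOpTime is already at operation 10 of the batch while durableOpTime still reads 9, meaning c20012 has applied the index build but has not yet journaled it. As the surrounding entries show, the primary only advances _lastCommittedOpTime on durable positions, which is why the waiters below keep logging that the required snapshot is not yet committed. A writer can request that durability explicitly with j: true; a minimal sketch (namespace illustrative; whether w: "majority" alone implies journal durability varies by server version, so it is spelled out here):

    // Sketch: block until a majority of members have *journaled* this write,
    // i.e. until their durableOpTime (not just appliedOpTime) reaches it.
    db.getSiblingDB("test").probe.insert(
        { x: 1 },
        { writeConcern: { w: "majority", j: true, wtimeout: 30000 } }
    );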
2016-04-06T02:52:12.969-0500 c20011| 2016-04-06T02:52:06.888-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:12.973-0500 c20011| 2016-04-06T02:52:06.888-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|10, t: 1 } and is durable through: { ts: Timestamp 1459929126000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.979-0500 c20011| 2016-04-06T02:52:06.888-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|9, t: 1 }, name-id: "38" } [js_test:multi_coll_drop] 2016-04-06T02:52:12.983-0500 c20011| 2016-04-06T02:52:06.888-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:12.997-0500 c20011| 2016-04-06T02:52:06.888-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:12.999-0500 c20012| 2016-04-06T02:52:06.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 124 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.025-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.026-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.027-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.027-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.030-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.030-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.035-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.037-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.037-0500 c20013| 2016-04-06T02:52:06.889-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:13.038-0500 c20013| 2016-04-06T02:52:06.890-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.039-0500 c20013| 2016-04-06T02:52:06.890-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.041-0500 c20013| 2016-04-06T02:52:06.890-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.041-0500 c20013| 2016-04-06T02:52:06.890-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.047-0500 c20013| 2016-04-06T02:52:06.894-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:13.052-0500 c20013| 2016-04-06T02:52:06.895-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.061-0500 c20013| 2016-04-06T02:52:06.895-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 127 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.063-0500 c20013| 2016-04-06T02:52:06.895-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 127 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.067-0500 c20011| 2016-04-06T02:52:06.895-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.067-0500 c20011| 2016-04-06T02:52:06.895-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:13.069-0500 c20011| 2016-04-06T02:52:06.895-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } 
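The repeated "Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot" lines are a w: "majority" waiter polling the commit point: the index insert that produced operation 10 cannot be acknowledged until _lastCommittedOpTime, and with it the storage engine's committed snapshot, reaches its optime, which happens a few entries below. The same committed snapshot backs majority reads; a minimal sketch of one in the command form of this era, assuming majority read concern is enabled on the server:

    // Sketch: a majority read only sees data in the committed snapshot whose
    // advancement ("Updating _lastCommittedOpTime ...") is logged here.
    var res = db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority" }
    });
    printjson(res.cursor.firstBatch);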
[js_test:multi_coll_drop] 2016-04-06T02:52:13.075-0500 c20011| 2016-04-06T02:52:06.895-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|10, t: 1 } and is durable through: { ts: Timestamp 1459929126000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.077-0500 c20011| 2016-04-06T02:52:06.895-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|9, t: 1 }, name-id: "38" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.084-0500 c20011| 2016-04-06T02:52:06.895-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.085-0500 c20013| 2016-04-06T02:52:06.895-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 127 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.089-0500 c20012| 2016-04-06T02:52:06.898-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.095-0500 c20012| 2016-04-06T02:52:06.898-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 126 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.097-0500 c20012| 2016-04-06T02:52:06.898-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 126 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.107-0500 c20011| 2016-04-06T02:52:06.898-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: 
-1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.107-0500 c20011| 2016-04-06T02:52:06.898-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:13.109-0500 c20011| 2016-04-06T02:52:06.898-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|10, t: 1 } and is durable through: { ts: Timestamp 1459929126000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.111-0500 c20011| 2016-04-06T02:52:06.898-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.115-0500 c20011| 2016-04-06T02:52:06.898-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.120-0500 c20011| 2016-04-06T02:52:06.898-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.124-0500 c20012| 2016-04-06T02:52:06.898-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 126 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.128-0500 c20011| 2016-04-06T02:52:06.898-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|9, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.147-0500 c20011| 2016-04-06T02:52:06.898-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|9, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.152-0500 c20011| 2016-04-06T02:52:06.898-0500 I COMMAND [conn10] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 47ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.155-0500 
c20012| 2016-04-06T02:52:06.898-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 123 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.158-0500 s20014| 2016-04-06T02:52:06.899-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 24 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929126000|10, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:13.161-0500 c20012| 2016-04-06T02:52:06.899-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.163-0500 s20014| 2016-04-06T02:52:06.899-0500 D ASIO [mongosMain] startCommand: RemoteCommand 26 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.899-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.164-0500 c20012| 2016-04-06T02:52:06.899-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:13.166-0500 s20014| 2016-04-06T02:52:06.899-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 26 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.167-0500 c20011| 2016-04-06T02:52:06.899-0500 D COMMAND [conn10] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.170-0500 c20011| 2016-04-06T02:52:06.899-0500 D STORAGE [conn10] stored meta data for config.locks @ RecordId(11) [js_test:multi_coll_drop] 2016-04-06T02:52:13.170-0500 c20011| 2016-04-06T02:52:06.899-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-23--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:13.176-0500 c20011| 2016-04-06T02:52:06.899-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.177-0500 c20011| 2016-04-06T02:52:06.899-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:13.178-0500 c20011| 2016-04-06T02:52:06.899-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.182-0500 c20011| 2016-04-06T02:52:06.899-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|10, t: 1 } and is durable through: { 
ts: Timestamp 1459929126000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.185-0500 c20011| 2016-04-06T02:52:06.899-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.189-0500 c20012| 2016-04-06T02:52:06.899-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 129 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.899-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.191-0500 c20012| 2016-04-06T02:52:06.899-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 129 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.194-0500 c20011| 2016-04-06T02:52:06.899-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.197-0500 c20011| 2016-04-06T02:52:06.903-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-23--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:13.198-0500 c20011| 2016-04-06T02:52:06.903-0500 D STORAGE [conn10] config.locks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:13.201-0500 c20011| 2016-04-06T02:52:06.903-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-24--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.207-0500 c20011| 2016-04-06T02:52:06.903-0500 D STORAGE [conn10] create uri: table:index-24--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.211-0500 c20013| 2016-04-06T02:52:06.898-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 126 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.222-0500 c20013| 2016-04-06T02:52:06.899-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, 
appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.233-0500 c20013| 2016-04-06T02:52:06.899-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 130 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.235-0500 c20013| 2016-04-06T02:52:06.899-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 130 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.239-0500 c20013| 2016-04-06T02:52:06.899-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 130 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.241-0500 c20013| 2016-04-06T02:52:06.905-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.241-0500 c20013| 2016-04-06T02:52:06.905-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:13.244-0500 c20013| 2016-04-06T02:52:06.905-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 132 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.905-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.247-0500 c20013| 2016-04-06T02:52:06.906-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 132 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.253-0500 c20011| 2016-04-06T02:52:06.906-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.254-0500 c20011| 2016-04-06T02:52:06.910-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-24--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:13.255-0500 c20011| 2016-04-06T02:52:06.910-0500 D STORAGE [conn10] config.locks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:13.256-0500 c20011| 2016-04-06T02:52:06.910-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-25--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ts" : 1 }, "name" : "ts_1", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.260-0500 c20011| 2016-04-06T02:52:06.910-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: 
"oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:457 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.263-0500 c20011| 2016-04-06T02:52:06.910-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:457 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.266-0500 c20011| 2016-04-06T02:52:06.910-0500 D STORAGE [conn10] create uri: table:index-25--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ts" : 1 }, "name" : "ts_1", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.273-0500 c20012| 2016-04-06T02:52:06.910-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 129 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|11, t: 1, h: 884972818564164215, v: 2, op: "c", ns: "config.$cmd", o: { create: "locks" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.276-0500 c20013| 2016-04-06T02:52:06.910-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 132 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|11, t: 1, h: 884972818564164215, v: 2, op: "c", ns: "config.$cmd", o: { create: "locks" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.277-0500 c20012| 2016-04-06T02:52:06.912-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|11 and ending at ts: Timestamp 1459929126000|11 [js_test:multi_coll_drop] 2016-04-06T02:52:13.279-0500 c20012| 2016-04-06T02:52:06.912-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:13.282-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.286-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.287-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.289-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.289-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.289-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.292-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.293-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.293-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.296-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.298-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.300-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.303-0500 c20012| 2016-04-06T02:52:06.912-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.303-0500 c20012| 2016-04-06T02:52:06.913-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.305-0500 c20012| 2016-04-06T02:52:06.913-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.306-0500 c20012| 2016-04-06T02:52:06.913-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:13.308-0500 c20012| 2016-04-06T02:52:06.913-0500 D STORAGE [repl writer worker 4] create collection config.locks {} [js_test:multi_coll_drop] 2016-04-06T02:52:13.308-0500 c20012| 2016-04-06T02:52:06.913-0500 D STORAGE [repl writer worker 4] stored meta data for config.locks @ RecordId(12) [js_test:multi_coll_drop] 2016-04-06T02:52:13.312-0500 c20012| 2016-04-06T02:52:06.913-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.314-0500 c20012| 2016-04-06T02:52:06.913-0500 D STORAGE [repl writer worker 4] WiredTigerKVEngine::createRecordStore uri: table:collection-25-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
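
The secondaries have just replicated the { op: "c", ns: "config.$cmd", o: { create: "locks" } } entry fetched above: rsSync batches it, a repl writer worker creates config.locks and its WiredTiger record store, and the worker pool spins up around the single-op batch. For experimentation, the same command-type oplog entry can be replayed by hand with applyOps; a minimal sketch, assuming a disposable test deployment (replaying ops against the config database is not something to do on a live cluster):

    // Replay the create-collection oplog entry seen above (test use only).
    // This mirrors what the repl writer worker does when applying the batch.
    var res = db.adminCommand({
        applyOps: [
            { op: "c", ns: "config.$cmd", o: { create: "locks" } }
        ]
    });
    printjson(res); // expect { applied: 1, results: [ true ], ok: 1 }
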
[js_test:multi_coll_drop] 2016-04-06T02:52:13.315-0500 2016-04-06T02:52:06.914-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused [js_test:multi_coll_drop] 2016-04-06T02:52:13.317-0500 c20012| 2016-04-06T02:52:06.914-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 131 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.914-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.319-0500 c20012| 2016-04-06T02:52:06.914-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 131 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.323-0500 c20011| 2016-04-06T02:52:06.914-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.324-0500 c20013| 2016-04-06T02:52:06.915-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|11 and ending at ts: Timestamp 1459929126000|11 [js_test:multi_coll_drop] 2016-04-06T02:52:13.326-0500 c20013| 2016-04-06T02:52:06.915-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:13.327-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.328-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.329-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.330-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.331-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.339-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.341-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.341-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.346-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.347-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.347-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.350-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.352-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.354-0500 c20013| 2016-04-06T02:52:06.916-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:13.355-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.356-0500 c20013| 2016-04-06T02:52:06.916-0500 D STORAGE [repl writer worker 15] create collection config.locks {} [js_test:multi_coll_drop] 2016-04-06T02:52:13.358-0500 c20013| 2016-04-06T02:52:06.916-0500 D STORAGE [repl writer worker 15] stored meta data for config.locks @ RecordId(12) [js_test:multi_coll_drop] 2016-04-06T02:52:13.362-0500 c20013| 2016-04-06T02:52:06.916-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-25-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:13.366-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.368-0500 c20013| 2016-04-06T02:52:06.916-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.369-0500 c20011| 2016-04-06T02:52:06.918-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-25--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:13.369-0500 c20011| 2016-04-06T02:52:06.918-0500 I INDEX [conn10] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.370-0500 c20011| 2016-04-06T02:52:06.918-0500 I INDEX [conn10] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:13.373-0500 c20011| 2016-04-06T02:52:06.918-0500 D INDEX [conn10] bulk commit starting for index: ts_1 [js_test:multi_coll_drop] 2016-04-06T02:52:13.376-0500 c20012| 2016-04-06T02:52:06.918-0500 D STORAGE [repl writer worker 4] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-25-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:13.378-0500 c20012| 2016-04-06T02:52:06.918-0500 D STORAGE [repl writer worker 4] config.locks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:13.383-0500 c20012| 2016-04-06T02:52:06.918-0500 D STORAGE [repl writer worker 4] WiredTigerKVEngine::createSortedDataInterface ident: index-26-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.388-0500 c20012| 2016-04-06T02:52:06.918-0500 D STORAGE [repl writer worker 4] create uri: table:index-26-6577373056560964212 config: 
type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.390-0500 c20011| 2016-04-06T02:52:06.918-0500 D INDEX [conn10] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:13.394-0500 c20011| 2016-04-06T02:52:06.919-0500 I INDEX [conn10] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:13.395-0500 c20011| 2016-04-06T02:52:06.919-0500 D STORAGE [conn10] config.locks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:13.399-0500 c20011| 2016-04-06T02:52:06.920-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:520 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.402-0500 c20012| 2016-04-06T02:52:06.920-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 131 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|12, t: 1, h: 3414081720856621777, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebf'), ns: "config.locks", key: { ts: 1 }, name: "ts_1" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.406-0500 c20012| 2016-04-06T02:52:06.920-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|12 and ending at ts: Timestamp 1459929126000|12 [js_test:multi_coll_drop] 2016-04-06T02:52:13.409-0500 c20011| 2016-04-06T02:52:06.921-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|10, t: 1 }, name-id: "40" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.411-0500 c20013| 2016-04-06T02:52:06.921-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-25-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:13.412-0500 c20013| 2016-04-06T02:52:06.921-0500 D STORAGE [repl writer worker 15] config.locks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:13.417-0500 c20013| 2016-04-06T02:52:06.921-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-26-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.421-0500 c20013| 2016-04-06T02:52:06.921-0500 D STORAGE [repl writer worker 15] create uri: table:index-26-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : 
"config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.424-0500 c20012| 2016-04-06T02:52:06.922-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 133 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.922-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.426-0500 c20012| 2016-04-06T02:52:06.922-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 133 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.436-0500 c20011| 2016-04-06T02:52:06.922-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.444-0500 c20012| 2016-04-06T02:52:06.923-0500 D STORAGE [repl writer worker 4] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-26-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:13.447-0500 c20012| 2016-04-06T02:52:06.923-0500 D STORAGE [repl writer worker 4] config.locks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:13.449-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.455-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.459-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.461-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.461-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.462-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.467-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.468-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.470-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.470-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.473-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.474-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.480-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl 
writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.482-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.482-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.484-0500 c20012| 2016-04-06T02:52:06.923-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.493-0500 c20012| 2016-04-06T02:52:06.923-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:13.499-0500 c20012| 2016-04-06T02:52:06.923-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:13.514-0500 c20012| 2016-04-06T02:52:06.924-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.523-0500 c20012| 2016-04-06T02:52:06.924-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 134 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.524-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.534-0500 c20012| 2016-04-06T02:52:06.924-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 134 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.536-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.540-0500 c20011| 2016-04-06T02:52:06.924-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, 
t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.540-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.542-0500 c20011| 2016-04-06T02:52:06.924-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:13.545-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.549-0500 c20011| 2016-04-06T02:52:06.924-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|11, t: 1 } and is durable through: { ts: Timestamp 1459929126000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.550-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.565-0500 c20011| 2016-04-06T02:52:06.924-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|10, t: 1 }, name-id: "40" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.571-0500 c20011| 2016-04-06T02:52:06.924-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.572-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.586-0500 c20011| 2016-04-06T02:52:06.924-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.589-0500 c20012| 2016-04-06T02:52:06.924-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 134 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.589-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.591-0500 c20012| 2016-04-06T02:52:06.924-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:13.592-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.596-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.598-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.603-0500 
c20012| 2016-04-06T02:52:06.924-0500 D STORAGE [repl writer worker 11] WiredTigerKVEngine::createSortedDataInterface ident: index-27-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ts" : 1 }, "name" : "ts_1", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.604-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.614-0500 c20012| 2016-04-06T02:52:06.924-0500 D STORAGE [repl writer worker 11] create uri: table:index-27-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ts" : 1 }, "name" : "ts_1", "ns" : "config.locks" }), [js_test:multi_coll_drop] 2016-04-06T02:52:13.616-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.617-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.618-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.619-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.621-0500 c20012| 2016-04-06T02:52:06.924-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.622-0500 c20013| 2016-04-06T02:52:06.925-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-26-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:13.624-0500 c20013| 2016-04-06T02:52:06.925-0500 D STORAGE [repl writer worker 15] config.locks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:13.626-0500 c20012| 2016-04-06T02:52:06.926-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.632-0500 c20012| 2016-04-06T02:52:06.926-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 136 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 1, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.633-0500 c20012| 2016-04-06T02:52:06.926-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 136 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.646-0500 c20011| 2016-04-06T02:52:06.926-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.646-0500 c20011| 2016-04-06T02:52:06.926-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:13.651-0500 c20011| 2016-04-06T02:52:06.926-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|11, t: 1 } and is durable through: { ts: Timestamp 1459929126000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.654-0500 c20011| 2016-04-06T02:52:06.926-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.657-0500 c20011| 2016-04-06T02:52:06.927-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|11, t: 1 }, name-id: "43" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.662-0500 c20011| 2016-04-06T02:52:06.927-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|11, t: 1 }, name-id: "43" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.666-0500 c20011| 2016-04-06T02:52:06.927-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.669-0500 c20011| 2016-04-06T02:52:06.927-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.671-0500 c20012| 2016-04-06T02:52:06.927-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 136 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.674-0500 c20012| 2016-04-06T02:52:06.927-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 133 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
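
The exchange above is the commit-point machinery at work: each secondary's SyncSourceFeedback reporter pushes replSetUpdatePosition upstream, and once the primary (c20011) learns that member 1 is durable through 1459929126000|11 it advances _lastCommittedOpTime to that optime; the secondaries learn the new commit point via the lastKnownCommittedOpTime piggybacked on their oplog cursors. The same bookkeeping can be observed from the shell with replSetGetStatus; a sketch, noting that field availability varies by server version (for example, optimeDurable is not reported by older servers):

    // Print each member's applied (and, where reported, durable) optime.
    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function (m) {
        print(m.name + " applied: " + tojson(m.optime) +
              " durable: " + tojson(m.optimeDurable));
    });
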
[js_test:multi_coll_drop] 2016-04-06T02:52:13.678-0500 c20011| 2016-04-06T02:52:06.927-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.679-0500 c20012| 2016-04-06T02:52:06.927-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.685-0500 c20012| 2016-04-06T02:52:06.927-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:13.717-0500 c20012| 2016-04-06T02:52:06.927-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 139 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.927-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.719-0500 c20012| 2016-04-06T02:52:06.927-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 139 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.721-0500 c20011| 2016-04-06T02:52:06.927-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.722-0500 c20012| 2016-04-06T02:52:06.927-0500 D STORAGE [repl writer worker 11] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-27-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:13.722-0500 c20012| 2016-04-06T02:52:06.928-0500 I INDEX [repl writer worker 11] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.724-0500 c20012| 2016-04-06T02:52:06.928-0500 I INDEX [repl writer worker 11] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:13.725-0500 c20012| 2016-04-06T02:52:06.928-0500 D INDEX [repl writer worker 11] bulk commit starting for index: ts_1 [js_test:multi_coll_drop] 2016-04-06T02:52:13.733-0500 c20012| 2016-04-06T02:52:06.928-0500 D INDEX [repl writer worker 11] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:13.734-0500 c20013| 2016-04-06T02:52:06.933-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.734-0500 c20013| 2016-04-06T02:52:06.933-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.735-0500 c20013| 2016-04-06T02:52:06.933-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.740-0500 c20013| 2016-04-06T02:52:06.933-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.741-0500 c20013| 2016-04-06T02:52:06.933-0500 D EXECUTOR [repl writer worker 5] shutting down
thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.749-0500 c20013| 2016-04-06T02:52:06.933-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.751-0500 c20013| 2016-04-06T02:52:06.933-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.754-0500 c20013| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.761-0500 c20013| 2016-04-06T02:52:06.935-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 134 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.935-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.762-0500 c20012| 2016-04-06T02:52:06.935-0500 I INDEX [repl writer worker 11] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:13.765-0500 c20012| 2016-04-06T02:52:06.935-0500 D STORAGE [repl writer worker 11] config.locks: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:13.765-0500 c20013| 2016-04-06T02:52:06.935-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 134 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.766-0500 c20013| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.767-0500 c20013| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.771-0500 c20011| 2016-04-06T02:52:06.935-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:13.773-0500 c20013| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.774-0500 c20012| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.780-0500 c20013| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.782-0500 c20012| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.786-0500 c20012| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.788-0500 c20012| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.790-0500 c20013| 2016-04-06T02:52:06.935-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.801-0500 c20011| 2016-04-06T02:52:06.935-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 
17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|10, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:520 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.805-0500 c20013| 2016-04-06T02:52:06.935-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 134 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|12, t: 1, h: 3414081720856621777, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ebf'), ns: "config.locks", key: { ts: 1 }, name: "ts_1" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.806-0500 c20012| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.808-0500 c20012| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.809-0500 c20012| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.812-0500 c20012| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.814-0500 c20012| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.816-0500 c20012| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.819-0500 c20013| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.820-0500 c20013| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.821-0500 c20013| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.822-0500 c20012| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.822-0500 c20012| 2016-04-06T02:52:06.936-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.824-0500 c20013| 2016-04-06T02:52:06.937-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
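
Requests 131, 132, and 134 above show the pull side of replication: each secondary tails the primary's oplog with an awaitData cursor, and every getMore carries the fetcher's election term plus its lastKnownCommittedOpTime, returning new entries (here the ts_1 index spec) in nextBatch. A rough shell equivalent of that loop, assuming a replica-set member on the current connection; term and lastKnownCommittedOpTime are internal replication fields copied from the log, and the timestamps are illustrative:

    // Tail local.oplog.rs roughly the way rsBackgroundSync does (sketch).
    var local = db.getSiblingDB("local");
    var res = local.runCommand({
        find: "oplog.rs",
        filter: { ts: { $gte: Timestamp(1459929117, 1) } }, // resume point (illustrative)
        tailable: true,
        awaitData: true,
        maxTimeMS: 2500
    });
    assert.commandWorked(res);
    // Follow-up batches are getMores on the same cursor:
    var more = local.runCommand({
        getMore: res.cursor.id,
        collection: "oplog.rs",
        maxTimeMS: 2500,
        term: NumberLong(1), // internal fields, as seen in the log
        lastKnownCommittedOpTime: { ts: Timestamp(1459929126, 10), t: NumberLong(1) }
    });
    printjson(more.cursor.nextBatch);

When nextBatch comes back empty ("fetcher read 0 operations"), the cursor merely waited out maxTimeMS and the fetcher immediately reissues the getMore.
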
[js_test:multi_coll_drop] 2016-04-06T02:52:13.829-0500 c20013| 2016-04-06T02:52:06.937-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.843-0500 c20013| 2016-04-06T02:52:06.937-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 136 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.846-0500 c20013| 2016-04-06T02:52:06.937-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 136 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.848-0500 c20012| 2016-04-06T02:52:06.937-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.850-0500 c20012| 2016-04-06T02:52:06.937-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.855-0500 c20011| 2016-04-06T02:52:06.937-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.855-0500 c20011| 2016-04-06T02:52:06.937-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:13.861-0500 c20011| 2016-04-06T02:52:06.937-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.861-0500 c20011| 2016-04-06T02:52:06.937-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|11, t: 1 } and is durable through: { ts: Timestamp 1459929126000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.883-0500 c20011| 2016-04-06T02:52:06.937-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|12, t: 1 } is not yet part of the current
'committed' snapshot: { optime: { ts: Timestamp 1459929126000|11, t: 1 }, name-id: "43" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.886-0500 c20011| 2016-04-06T02:52:06.937-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:13.887-0500 c20012| 2016-04-06T02:52:06.937-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.888-0500 c20012| 2016-04-06T02:52:06.937-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:13.889-0500 c20013| 2016-04-06T02:52:06.937-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 136 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.890-0500 c20012| 2016-04-06T02:52:06.938-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:13.894-0500 c20012| 2016-04-06T02:52:06.938-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.896-0500 c20012| 2016-04-06T02:52:06.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 140 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.897-0500 c20012| 2016-04-06T02:52:06.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 140 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.901-0500 c20013| 2016-04-06T02:52:06.938-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] }
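
The repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines are a w:"majority" writer parked on the primary: the insert that produced optime 1459929126000|12 cannot be acknowledged until that optime enters the committed snapshot. From the client side this is just an ordinary majority write; a minimal sketch with an illustrative document (not one from the test):

    // A majority write returns only once its optime is majority-committed,
    // which is exactly the wait being logged above.
    var res = db.getSiblingDB("config").runCommand({
        insert: "locks",
        documents: [ { _id: "example-lock", state: 0 } ], // illustrative document
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    assert.commandWorked(res);
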
Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.908-0500 c20013| 2016-04-06T02:52:06.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 138 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.909-0500 c20013| 2016-04-06T02:52:06.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 138 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:13.910-0500 c20012| 2016-04-06T02:52:06.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 140 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.915-0500 c20011| 2016-04-06T02:52:06.938-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:13.916-0500 c20011| 2016-04-06T02:52:06.938-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:13.918-0500 c20011| 2016-04-06T02:52:06.938-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|12, t: 1 } and is durable through: { ts: Timestamp 1459929126000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.920-0500 c20011| 2016-04-06T02:52:06.938-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|11, t: 1 }, name-id: "43" } [js_test:multi_coll_drop] 2016-04-06T02:52:13.922-0500 c20011| 2016-04-06T02:52:06.938-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:13.928-0500 c20011| 2016-04-06T02:52:06.938-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { 
ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:13.933-0500 c20011| 2016-04-06T02:52:06.938-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.933-0500 c20011| 2016-04-06T02:52:06.938-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:13.936-0500 c20011| 2016-04-06T02:52:06.938-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.939-0500 c20011| 2016-04-06T02:52:06.938-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|11, t: 1 } and is durable through: { ts: Timestamp 1459929126000|11, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.940-0500 c20011| 2016-04-06T02:52:06.938-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|11, t: 1 }, name-id: "43" }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.943-0500 c20011| 2016-04-06T02:52:06.938-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:13.944-0500 c20013| 2016-04-06T02:52:06.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 138 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.947-0500 c20012| 2016-04-06T02:52:06.940-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.949-0500 c20012| 2016-04-06T02:52:06.940-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 142 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.950-0500 c20012| 2016-04-06T02:52:06.940-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 142 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:13.957-0500 c20011| 2016-04-06T02:52:06.940-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.957-0500 c20011| 2016-04-06T02:52:06.940-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:13.961-0500 c20011| 2016-04-06T02:52:06.940-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|12, t: 1 } and is durable through: { ts: Timestamp 1459929126000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.962-0500 c20011| 2016-04-06T02:52:06.940-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.963-0500 c20011| 2016-04-06T02:52:06.940-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.968-0500 c20011| 2016-04-06T02:52:06.940-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:13.969-0500 c20012| 2016-04-06T02:52:06.940-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 142 finished with response: { ok: 1.0 }
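The replSetUpdatePosition traffic above is how each secondary reports its applied and durable optimes to its sync source; the primary advances _lastCommittedOpTime once a majority of members has reported a given optime as durable. A minimal shell sketch for inspecting the same per-member optimes from outside, assuming a server new enough to expose optimeDurable in replSetGetStatus (not part of this test):

    // Sketch only: print each member's applied and durable optimes,
    // the same values carried by the replSetUpdatePosition commands above.
    var status = db.adminCommand({replSetGetStatus: 1});
    status.members.forEach(function(m) {
        print(m.name + " applied: " + tojson(m.optime) +
              " durable: " + tojson(m.optimeDurable));
    });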
[js_test:multi_coll_drop] 2016-04-06T02:52:13.971-0500 c20011| 2016-04-06T02:52:06.941-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|11, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 13ms
[js_test:multi_coll_drop] 2016-04-06T02:52:13.973-0500 c20011| 2016-04-06T02:52:06.941-0500 I COMMAND [conn10] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 41ms
[js_test:multi_coll_drop] 2016-04-06T02:52:13.974-0500 c20012| 2016-04-06T02:52:06.941-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 139 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.976-0500 s20014| 2016-04-06T02:52:06.941-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 26 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929126000|12, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.979-0500 c20012| 2016-04-06T02:52:06.941-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.979-0500 c20012| 2016-04-06T02:52:06.941-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:13.984-0500 s20014| 2016-04-06T02:52:06.941-0500 D ASIO [mongosMain] startCommand: RemoteCommand 28 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.941-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.985-0500 s20014| 2016-04-06T02:52:06.941-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 28 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:13.987-0500 c20011| 2016-04-06T02:52:06.941-0500 D COMMAND [conn10] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:13.989-0500 c20011| 2016-04-06T02:52:06.941-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-26--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "state" : 1, "process" : 1 }, "name" : "state_1_process_1", "ns" : "config.locks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:13.994-0500 c20011| 2016-04-06T02:52:06.941-0500 D STORAGE [conn10] create uri: table:index-26--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "state" : 1, "process" : 1 }, "name" : "state_1_process_1", "ns" : "config.locks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:13.997-0500 c20012| 2016-04-06T02:52:06.942-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 145 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.942-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.005-0500 c20012| 2016-04-06T02:52:06.942-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 145 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.005-0500 c20011| 2016-04-06T02:52:06.942-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.015-0500 c20013| 2016-04-06T02:52:06.945-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|11, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.016-0500 c20011| 2016-04-06T02:52:06.945-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-26--6404702321693896372 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:14.017-0500 c20011| 2016-04-06T02:52:06.945-0500 I INDEX [conn10] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.018-0500 c20011| 2016-04-06T02:52:06.945-0500 I INDEX [conn10] building index using bulk method
[js_test:multi_coll_drop] 2016-04-06T02:52:14.018-0500 c20011| 2016-04-06T02:52:06.945-0500 D INDEX [conn10] bulk commit starting for index: state_1_process_1
[js_test:multi_coll_drop] 2016-04-06T02:52:14.019-0500 c20011| 2016-04-06T02:52:06.946-0500 D INDEX [conn10] done building bottom layer, going to commit
[js_test:multi_coll_drop] 2016-04-06T02:52:14.022-0500 c20013| 2016-04-06T02:52:06.946-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|12 and ending at ts: Timestamp 1459929126000|12
[js_test:multi_coll_drop] 2016-04-06T02:52:14.026-0500 c20011| 2016-04-06T02:52:06.946-0500 I INDEX [conn10] build index done. scanned 0 total records. 0 secs
[js_test:multi_coll_drop] 2016-04-06T02:52:14.026-0500 c20011| 2016-04-06T02:52:06.946-0500 D STORAGE [conn10] config.locks: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:14.028-0500 c20013| 2016-04-06T02:52:06.946-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:14.032-0500 c20011| 2016-04-06T02:52:06.946-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:549 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.035-0500 c20012| 2016-04-06T02:52:06.946-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 145 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|13, t: 1, h: -4493284270965483374, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ec0'), ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.035-0500 c20012| 2016-04-06T02:52:06.947-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|13 and ending at ts: Timestamp 1459929126000|13
[js_test:multi_coll_drop] 2016-04-06T02:52:14.036-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.036-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.036-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.036-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
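The getMore calls on local.oplog.rs above are the secondaries' oplog fetchers at work: each batch carries the insert op for the new index, and lastKnownCommittedOpTime lets the sync source piggyback commit-point advancement on the reply. A rough user-level analogue of that tailing loop, written as a plain shell sketch (the server-internal fetcher does not go through DBQuery):

    // Sketch only: tail the oplog for config-related entries the way
    // the log above shows the background sync fetchers doing internally.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var cur = oplog.find({ns: /^config\./})
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    for (var i = 0; i < 3 && cur.hasNext(); i++) {
        printjson(cur.next());  // e.g. the op: "i" entry for state_1_process_1
    }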
[js_test:multi_coll_drop] 2016-04-06T02:52:14.040-0500 c20012| 2016-04-06T02:52:06.947-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:14.040-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.041-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.044-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.044-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.046-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.048-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.050-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.053-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.054-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.054-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.055-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.055-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.056-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.059-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.059-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.061-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.064-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.085-0500 c20013| 2016-04-06T02:52:06.947-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:14.086-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.086-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.087-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.091-0500 c20012| 2016-04-06T02:52:06.947-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:14.095-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.096-0500 c20013| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.098-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.116-0500 c20013| 2016-04-06T02:52:06.947-0500 D STORAGE [repl writer worker 14] WiredTigerKVEngine::createSortedDataInterface ident: index-27-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ts" : 1 }, "name" : "ts_1", "ns" : "config.locks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:14.119-0500 c20012| 2016-04-06T02:52:06.947-0500 D STORAGE [repl writer worker 1] WiredTigerKVEngine::createSortedDataInterface ident: index-28-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "state" : 1, "process" : 1 }, "name" : "state_1_process_1", "ns" : "config.locks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:14.120-0500 c20013| 2016-04-06T02:52:06.947-0500 D STORAGE [repl writer worker 14] create uri: table:index-27-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ts" : 1 }, "name" : "ts_1", "ns" : "config.locks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:14.125-0500 c20012| 2016-04-06T02:52:06.947-0500 D STORAGE [repl writer worker 1] create uri: table:index-28-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "state" : 1, "process" : 1 }, "name" : "state_1_process_1", "ns" : "config.locks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:14.129-0500 c20012| 2016-04-06T02:52:06.947-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.133-0500 c20013| 2016-04-06T02:52:06.948-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.136-0500 c20012| 2016-04-06T02:52:06.949-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 147 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.949-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.139-0500 c20012| 2016-04-06T02:52:06.949-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 147 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.141-0500 c20011| 2016-04-06T02:52:06.949-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.141-0500 c20012| 2016-04-06T02:52:06.949-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.142-0500 c20012| 2016-04-06T02:52:06.950-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.145-0500 c20011| 2016-04-06T02:52:06.950-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|13, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|12, t: 1 }, name-id: "46" }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.148-0500 c20013| 2016-04-06T02:52:06.950-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 140 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.950-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.149-0500 c20013| 2016-04-06T02:52:06.950-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 140 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.153-0500 c20011| 2016-04-06T02:52:06.950-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.154-0500 c20012| 2016-04-06T02:52:06.950-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.156-0500 c20011| 2016-04-06T02:52:06.951-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|11, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:549 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.157-0500 c20013| 2016-04-06T02:52:06.951-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 140 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|13, t: 1, h: -4493284270965483374, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ec0'), ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.158-0500 c20012| 2016-04-06T02:52:06.952-0500 D STORAGE [repl writer worker 1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-28-6577373056560964212 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:14.163-0500 c20012| 2016-04-06T02:52:06.952-0500 I INDEX [repl writer worker 1] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
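Each config.locks index in this stretch originates as a mongos insert into config.system.indexes with writeConcern w: "majority" (the 3.2-era creation path shown in the run command lines above); the secondaries then replay it from the oplog batches. On current servers the equivalent, sketched loosely rather than as the exact path this version takes, is the createIndexes command:

    // Sketch only: a modern equivalent of the system.indexes insert
    // that mongos issues above for config.locks.
    db.getSiblingDB("config").runCommand({
        createIndexes: "locks",
        indexes: [{key: {state: 1, process: 1}, name: "state_1_process_1"}]
    });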
[js_test:multi_coll_drop] 2016-04-06T02:52:14.164-0500 c20012| 2016-04-06T02:52:06.952-0500 I INDEX [repl writer worker 1] building index using bulk method
[js_test:multi_coll_drop] 2016-04-06T02:52:14.166-0500 c20012| 2016-04-06T02:52:06.952-0500 D INDEX [repl writer worker 1] bulk commit starting for index: state_1_process_1
[js_test:multi_coll_drop] 2016-04-06T02:52:14.166-0500 c20012| 2016-04-06T02:52:06.953-0500 D INDEX [repl writer worker 1] done building bottom layer, going to commit
[js_test:multi_coll_drop] 2016-04-06T02:52:14.168-0500 c20013| 2016-04-06T02:52:06.952-0500 D STORAGE [repl writer worker 14] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-27-751336887848580549 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:14.171-0500 c20013| 2016-04-06T02:52:06.952-0500 I INDEX [repl writer worker 14] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.172-0500 c20013| 2016-04-06T02:52:06.952-0500 I INDEX [repl writer worker 14] building index using bulk method
[js_test:multi_coll_drop] 2016-04-06T02:52:14.174-0500 c20013| 2016-04-06T02:52:06.952-0500 D INDEX [repl writer worker 14] bulk commit starting for index: ts_1
[js_test:multi_coll_drop] 2016-04-06T02:52:14.175-0500 c20013| 2016-04-06T02:52:06.953-0500 D INDEX [repl writer worker 14] done building bottom layer, going to commit
[js_test:multi_coll_drop] 2016-04-06T02:52:14.182-0500 c20013| 2016-04-06T02:52:06.955-0500 I INDEX [repl writer worker 14] build index done. scanned 0 total records. 0 secs
[js_test:multi_coll_drop] 2016-04-06T02:52:14.186-0500 c20012| 2016-04-06T02:52:06.955-0500 I INDEX [repl writer worker 1] build index done. scanned 0 total records. 0 secs
[js_test:multi_coll_drop] 2016-04-06T02:52:14.188-0500 c20013| 2016-04-06T02:52:06.955-0500 D STORAGE [repl writer worker 14] config.locks: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:14.188-0500 c20012| 2016-04-06T02:52:06.955-0500 D STORAGE [repl writer worker 1] config.locks: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:14.191-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.191-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.193-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.194-0500 c20013| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.194-0500 c20013| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.195-0500 c20013| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.196-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.197-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.199-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.200-0500 c20013| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.202-0500 c20013| 2016-04-06T02:52:06.955-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.203-0500 c20013| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.206-0500 c20013| 2016-04-06T02:52:06.955-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|13 and ending at ts: Timestamp 1459929126000|13
[js_test:multi_coll_drop] 2016-04-06T02:52:14.207-0500 c20013| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.208-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.209-0500 c20013| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.212-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.213-0500 c20013| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.213-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.214-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.216-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.219-0500 c20012| 2016-04-06T02:52:06.955-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.221-0500 c20013| 2016-04-06T02:52:06.956-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.221-0500 c20013| 2016-04-06T02:52:06.956-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.230-0500 c20013| 2016-04-06T02:52:06.956-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.235-0500 c20013| 2016-04-06T02:52:06.957-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 142 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.957-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.237-0500 c20013| 2016-04-06T02:52:06.958-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 142 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.241-0500 c20011| 2016-04-06T02:52:06.958-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.241-0500 c20012| 2016-04-06T02:52:06.959-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.243-0500 c20012| 2016-04-06T02:52:06.959-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.245-0500 c20012| 2016-04-06T02:52:06.960-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.254-0500 c20012| 2016-04-06T02:52:06.960-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.255-0500 c20012| 2016-04-06T02:52:06.960-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:14.256-0500 c20013| 2016-04-06T02:52:06.960-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.262-0500 c20013| 2016-04-06T02:52:06.960-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.269-0500 c20013| 2016-04-06T02:52:06.960-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.272-0500 c20013| 2016-04-06T02:52:06.960-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.282-0500 c20012| 2016-04-06T02:52:06.960-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.290-0500 c20012| 2016-04-06T02:52:06.960-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 148 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
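Note the asymmetry in the report above: member 1 has applied { ts: Timestamp 1459929126000|13, t: 1 } but is durable only through |12, i.e. the last op is applied in memory but not yet journaled. Majority write concern waits on the durable column; a minimal sketch of a write that blocks until a majority of members report a durableOpTime past it (the "example" document is hypothetical, not part of the test):

    // Sketch only: w:"majority" returns once a majority of members have
    // journaled the write, per the durableOpTime reports above.
    db.getSiblingDB("config").locks.insert(
        {_id: "example", state: 0},
        {writeConcern: {w: "majority", wtimeout: 30000}});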
[js_test:multi_coll_drop] 2016-04-06T02:52:14.294-0500 c20012| 2016-04-06T02:52:06.960-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 148 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.303-0500 c20011| 2016-04-06T02:52:06.960-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.304-0500 c20011| 2016-04-06T02:52:06.960-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:14.306-0500 c20011| 2016-04-06T02:52:06.960-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|13, t: 1 } and is durable through: { ts: Timestamp 1459929126000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.312-0500 c20011| 2016-04-06T02:52:06.960-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|13, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|12, t: 1 }, name-id: "46" }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.316-0500 c20011| 2016-04-06T02:52:06.960-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.319-0500 c20011| 2016-04-06T02:52:06.960-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.320-0500 c20012| 2016-04-06T02:52:06.960-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 148 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.322-0500 c20013| 2016-04-06T02:52:06.960-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.328-0500 c20011| 2016-04-06T02:52:06.962-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.331-0500 c20011| 2016-04-06T02:52:06.962-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:14.338-0500 c20011| 2016-04-06T02:52:06.962-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.341-0500 c20011| 2016-04-06T02:52:06.962-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|12, t: 1 } and is durable through: { ts: Timestamp 1459929126000|11, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.348-0500 c20011| 2016-04-06T02:52:06.962-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|13, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|12, t: 1 }, name-id: "46" }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.358-0500 c20011| 2016-04-06T02:52:06.962-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.365-0500 c20013| 2016-04-06T02:52:06.961-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:14.371-0500 c20013| 2016-04-06T02:52:06.961-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.376-0500 c20013| 2016-04-06T02:52:06.961-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 143 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.377-0500 c20013| 2016-04-06T02:52:06.962-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 143 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.379-0500 c20013| 2016-04-06T02:52:06.962-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 143 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.382-0500 c20013| 2016-04-06T02:52:06.962-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:14.382-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.384-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.385-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.386-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.386-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.390-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.391-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.392-0500 c20013| 2016-04-06T02:52:06.963-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.392-0500 c20013| 2016-04-06T02:52:06.963-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.395-0500 c20013| 2016-04-06T02:52:06.963-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.395-0500 c20013| 2016-04-06T02:52:06.963-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.398-0500 c20013| 2016-04-06T02:52:06.963-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.398-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.400-0500 c20013| 2016-04-06T02:52:06.962-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.401-0500 c20013| 2016-04-06T02:52:06.963-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.402-0500 c20013| 2016-04-06T02:52:06.963-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.402-0500 c20013| 2016-04-06T02:52:06.963-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:14.406-0500 c20013| 2016-04-06T02:52:06.963-0500 D STORAGE [repl writer worker 1] WiredTigerKVEngine::createSortedDataInterface ident: index-28-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "state" : 1, "process" : 1 }, "name" : "state_1_process_1", "ns" : "config.locks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:14.410-0500 c20013| 2016-04-06T02:52:06.963-0500 D STORAGE [repl writer worker 1] create uri: table:index-28-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "state" : 1, "process" : 1 }, "name" : "state_1_process_1", "ns" : "config.locks" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:14.413-0500 c20012| 2016-04-06T02:52:06.964-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.422-0500 c20012| 2016-04-06T02:52:06.964-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 150 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.423-0500 c20012| 2016-04-06T02:52:06.964-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 150 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.425-0500 c20011| 2016-04-06T02:52:06.964-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.426-0500 c20011| 2016-04-06T02:52:06.964-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:14.430-0500 c20011| 2016-04-06T02:52:06.964-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|13, t: 1 } and is durable through: { ts: Timestamp 1459929126000|13, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.432-0500 c20011| 2016-04-06T02:52:06.964-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|13, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.435-0500 c20011| 2016-04-06T02:52:06.964-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.443-0500 c20011| 2016-04-06T02:52:06.964-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.449-0500 c20012| 2016-04-06T02:52:06.964-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 150 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.451-0500 c20011| 2016-04-06T02:52:06.964-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 15ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.459-0500 c20011| 2016-04-06T02:52:06.964-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|12, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.461-0500 c20012| 2016-04-06T02:52:06.965-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 147 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.463-0500 c20013| 2016-04-06T02:52:06.965-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 142 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.465-0500 c20013| 2016-04-06T02:52:06.965-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|13, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.469-0500 c20013| 2016-04-06T02:52:06.965-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:14.471-0500 c20013| 2016-04-06T02:52:06.965-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 146 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.965-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.479-0500 c20011| 2016-04-06T02:52:06.965-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.480-0500 c20011| 2016-04-06T02:52:06.965-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:14.488-0500 c20011| 2016-04-06T02:52:06.965-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.490-0500 c20011| 2016-04-06T02:52:06.965-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.492-0500 c20011| 2016-04-06T02:52:06.965-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|12, t: 1 } and is durable through: { ts: Timestamp 1459929126000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.498-0500 c20011| 2016-04-06T02:52:06.965-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.503-0500 c20013| 2016-04-06T02:52:06.965-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.507-0500 c20013| 2016-04-06T02:52:06.965-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 147 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.510-0500 c20013| 2016-04-06T02:52:06.965-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 147 on host mongovm16:20011
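The recurring "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines are the flip side of this bookkeeping: a snapshot only becomes 'committed' once the majority commit point reaches it, and that snapshot is what majority reads observe. A minimal sketch, assuming the server was started with majority read concern support enabled:

    // Sketch only: a read that is served from the 'committed' snapshot
    // the log keeps referring to.
    db.getSiblingDB("config").runCommand({
        find: "locks",
        filter: {},
        readConcern: {level: "majority"}
    });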
[js_test:multi_coll_drop] 2016-04-06T02:52:14.511-0500 c20013| 2016-04-06T02:52:06.965-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 146 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.512-0500 c20013| 2016-04-06T02:52:06.965-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 147 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.515-0500 c20012| 2016-04-06T02:52:06.966-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|13, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.516-0500 c20012| 2016-04-06T02:52:06.966-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:14.521-0500 c20012| 2016-04-06T02:52:06.966-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 153 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.966-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.523-0500 c20012| 2016-04-06T02:52:06.966-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 153 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.525-0500 c20011| 2016-04-06T02:52:06.966-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.525-0500 c20013| 2016-04-06T02:52:06.968-0500 D STORAGE [repl writer worker 1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-28-751336887848580549 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:14.527-0500 c20013| 2016-04-06T02:52:06.968-0500 I INDEX [repl writer worker 1] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.527-0500 c20013| 2016-04-06T02:52:06.968-0500 I INDEX [repl writer worker 1] building index using bulk method
[js_test:multi_coll_drop] 2016-04-06T02:52:14.530-0500 c20013| 2016-04-06T02:52:06.968-0500 D INDEX [repl writer worker 1] bulk commit starting for index: state_1_process_1
[js_test:multi_coll_drop] 2016-04-06T02:52:14.531-0500 c20013| 2016-04-06T02:52:06.968-0500 D INDEX [repl writer worker 1] done building bottom layer, going to commit
[js_test:multi_coll_drop] 2016-04-06T02:52:14.535-0500 c20011| 2016-04-06T02:52:06.971-0500 I COMMAND [conn10] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 30ms
[js_test:multi_coll_drop] 2016-04-06T02:52:14.538-0500 s20014| 2016-04-06T02:52:06.971-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 28 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929126000|13, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.542-0500 s20014| 2016-04-06T02:52:06.972-0500 D ASIO [mongosMain] startCommand: RemoteCommand 30 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:36.972-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.543-0500 s20014| 2016-04-06T02:52:06.972-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 30 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:14.548-0500 c20011| 2016-04-06T02:52:06.972-0500 D COMMAND [conn10] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:14.552-0500 c20011| 2016-04-06T02:52:06.972-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-27--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ping" : 1 }, "name" : "ping_1", "ns" : "config.lockpings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:14.558-0500 c20011| 2016-04-06T02:52:06.972-0500 D STORAGE [conn10] create uri: table:index-27--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ping" : 1 }, "name" : "ping_1", "ns" : "config.lockpings" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:14.560-0500 c20013| 2016-04-06T02:52:06.976-0500 I INDEX [repl writer worker 1] build index done. scanned 0 total records. 0 secs
[js_test:multi_coll_drop] 2016-04-06T02:52:14.564-0500 c20013| 2016-04-06T02:52:06.976-0500 D STORAGE [repl writer worker 1] config.locks: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:14.566-0500 c20013| 2016-04-06T02:52:06.976-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.570-0500 c20013| 2016-04-06T02:52:06.976-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.571-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.575-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.577-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.580-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.582-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.582-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.583-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.584-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.585-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.586-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.588-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.589-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.590-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.591-0500 c20013| 2016-04-06T02:52:06.977-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:14.592-0500 c20013| 2016-04-06T02:52:06.977-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:14.601-0500 c20011| 2016-04-06T02:52:06.978-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.602-0500 c20011| 2016-04-06T02:52:06.978-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:14.604-0500 c20011| 2016-04-06T02:52:06.978-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.607-0500 c20011| 2016-04-06T02:52:06.978-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|13, t: 1 } and is durable through: { ts: Timestamp 1459929126000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.623-0500 c20011| 2016-04-06T02:52:06.978-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:14.628-0500 c20013| 2016-04-06T02:52:06.978-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.632-0500 c20013| 2016-04-06T02:52:06.978-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 149 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.633-0500 c20013| 
2016-04-06T02:52:06.978-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 149 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:14.634-0500 c20013| 2016-04-06T02:52:06.978-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 149 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.637-0500 c20011| 2016-04-06T02:52:06.980-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.638-0500 c20011| 2016-04-06T02:52:06.980-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:14.643-0500 c20013| 2016-04-06T02:52:06.980-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.648-0500 c20013| 2016-04-06T02:52:06.980-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 151 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.649-0500 c20013| 2016-04-06T02:52:06.980-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 151 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:14.657-0500 c20011| 2016-04-06T02:52:06.980-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.660-0500 c20011| 2016-04-06T02:52:06.980-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|13, t: 1 } and is durable through: { ts: Timestamp 1459929126000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.672-0500 c20011| 2016-04-06T02:52:06.980-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:14.679-0500 c20013| 2016-04-06T02:52:06.980-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 151 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.691-0500 c20011| 2016-04-06T02:52:06.980-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-27--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:14.704-0500 c20011| 2016-04-06T02:52:06.980-0500 I INDEX [conn10] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } [js_test:multi_coll_drop] 2016-04-06T02:52:14.707-0500 c20011| 2016-04-06T02:52:06.980-0500 I INDEX [conn10] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:14.708-0500 c20011| 2016-04-06T02:52:06.980-0500 D INDEX [conn10] bulk commit starting for index: ping_1 [js_test:multi_coll_drop] 2016-04-06T02:52:14.710-0500 c20011| 2016-04-06T02:52:06.981-0500 D INDEX [conn10] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:14.711-0500 c20011| 2016-04-06T02:52:06.988-0500 I INDEX [conn10] build index done. scanned 1 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:14.711-0500 c20011| 2016-04-06T02:52:06.988-0500 D STORAGE [conn10] config.lockpings: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:14.712-0500 c20011| 2016-04-06T02:52:06.992-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:528 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 25ms [js_test:multi_coll_drop] 2016-04-06T02:52:14.715-0500 c20012| 2016-04-06T02:52:06.992-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 153 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|14, t: 1, h: 3966918412185610701, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ec1'), ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.717-0500 c20011| 2016-04-06T02:52:06.992-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:528 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 26ms [js_test:multi_coll_drop] 2016-04-06T02:52:14.725-0500 c20013| 2016-04-06T02:52:06.992-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 146 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929126000|14, t: 1, h: 3966918412185610701, v: 2, op: "i", ns: 
"config.system.indexes", o: { _id: ObjectId('5704c0263876c4cfd2eb3ec1'), ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.726-0500 c20013| 2016-04-06T02:52:06.993-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|14 and ending at ts: Timestamp 1459929126000|14 [js_test:multi_coll_drop] 2016-04-06T02:52:14.729-0500 c20012| 2016-04-06T02:52:06.993-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929126000|14 and ending at ts: Timestamp 1459929126000|14 [js_test:multi_coll_drop] 2016-04-06T02:52:14.734-0500 c20012| 2016-04-06T02:52:06.993-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:14.735-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.736-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.737-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.738-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.738-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.739-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.741-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.742-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.743-0500 c20013| 2016-04-06T02:52:06.994-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:14.744-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.746-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.747-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.749-0500 c20012| 2016-04-06T02:52:06.994-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:14.750-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.750-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.752-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.753-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.756-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.757-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.757-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.758-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.759-0500 c20012| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.764-0500 c20012| 2016-04-06T02:52:06.994-0500 D STORAGE [repl writer worker 3] WiredTigerKVEngine::createSortedDataInterface ident: index-29-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ping" : 1 }, "name" : "ping_1", "ns" : "config.lockpings" }), [js_test:multi_coll_drop] 2016-04-06T02:52:14.778-0500 c20012| 2016-04-06T02:52:06.994-0500 D STORAGE [repl writer worker 3] create uri: table:index-29-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ping" : 1 }, "name" : "ping_1", "ns" : "config.lockpings" }), [js_test:multi_coll_drop] 2016-04-06T02:52:14.783-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.784-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool 
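
Annotation: the interleaved EXECUTOR lines above are the batch-apply machinery. On each secondary, rsBackgroundSync tails the primary's oplog (the getMore calls with maxTimeMS: 2500), rsSync groups what the fetcher returns into a batch ("replication batch size is 1"), and a pool of sixteen "repl writer worker" threads is spun up to apply it and torn down afterwards, which is why start/shutdown lines bracket every applied op. A sketch of the tailing read itself, assuming a shell connected to the primary; the cursor id, term, and committed optime are copied from the log:

    // Illustrative sketch of the fetcher's awaiting oplog read.
    db.getSiblingDB("local").runCommand({
        getMore: NumberLong("20785203637"),
        collection: "oplog.rs",
        maxTimeMS: 2500,      // wait up to 2.5s for new oplog entries
        term: NumberLong(1),  // replication term of the requesting node
        lastKnownCommittedOpTime: { ts: Timestamp(1459929126, 13), t: NumberLong(1) }
    });
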
[js_test:multi_coll_drop] 2016-04-06T02:52:14.787-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.789-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.790-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.791-0500 c20013| 2016-04-06T02:52:06.994-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.828-0500 c20013| 2016-04-06T02:52:06.995-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.829-0500 c20013| 2016-04-06T02:52:06.995-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:14.833-0500 c20013| 2016-04-06T02:52:06.995-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.835-0500 c20013| 2016-04-06T02:52:06.995-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.836-0500 c20012| 2016-04-06T02:52:06.995-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.841-0500 c20013| 2016-04-06T02:52:06.995-0500 D STORAGE [repl writer worker 0] WiredTigerKVEngine::createSortedDataInterface ident: index-29-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ping" : 1 }, "name" : "ping_1", "ns" : "config.lockpings" }), [js_test:multi_coll_drop] 2016-04-06T02:52:14.843-0500 c20013| 2016-04-06T02:52:06.995-0500 D STORAGE [repl writer worker 0] create uri: table:index-29-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "ping" : 1 }, "name" : "ping_1", "ns" : "config.lockpings" }), [js_test:multi_coll_drop] 2016-04-06T02:52:14.844-0500 c20013| 2016-04-06T02:52:06.995-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.846-0500 c20012| 2016-04-06T02:52:06.995-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 155 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.995-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:14.848-0500 c20012| 2016-04-06T02:52:06.997-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 155 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:14.854-0500 c20011| 2016-04-06T02:52:06.997-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:14.856-0500 c20011| 2016-04-06T02:52:06.998-0500 D COMMAND 
[conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:14.859-0500 c20011| 2016-04-06T02:52:06.999-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929126000|14, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|13, t: 1 }, name-id: "49" } [js_test:multi_coll_drop] 2016-04-06T02:52:14.860-0500 c20013| 2016-04-06T02:52:06.998-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 154 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:11.998-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:14.862-0500 c20013| 2016-04-06T02:52:06.998-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 154 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:14.863-0500 c20013| 2016-04-06T02:52:06.995-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.866-0500 c20013| 2016-04-06T02:52:07.029-0500 D STORAGE [repl writer worker 0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-29-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:14.867-0500 c20013| 2016-04-06T02:52:07.029-0500 I INDEX [repl writer worker 0] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } [js_test:multi_coll_drop] 2016-04-06T02:52:14.868-0500 c20013| 2016-04-06T02:52:07.029-0500 I INDEX [repl writer worker 0] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:14.869-0500 c20013| 2016-04-06T02:52:07.029-0500 D INDEX [repl writer worker 0] bulk commit starting for index: ping_1 [js_test:multi_coll_drop] 2016-04-06T02:52:14.870-0500 c20013| 2016-04-06T02:52:07.029-0500 D INDEX [repl writer worker 0] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:14.871-0500 c20012| 2016-04-06T02:52:07.034-0500 D STORAGE [repl writer worker 3] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-29-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:14.872-0500 c20012| 2016-04-06T02:52:07.034-0500 I INDEX [repl writer worker 3] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } [js_test:multi_coll_drop] 2016-04-06T02:52:14.874-0500 c20012| 2016-04-06T02:52:07.035-0500 I INDEX [repl writer worker 3] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:14.875-0500 c20012| 2016-04-06T02:52:07.035-0500 D INDEX [repl writer worker 3] bulk commit starting for index: ping_1 [js_test:multi_coll_drop] 2016-04-06T02:52:14.875-0500 c20012| 2016-04-06T02:52:07.035-0500 D INDEX [repl writer worker 3] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:14.877-0500 c20013| 2016-04-06T02:52:07.036-0500 I INDEX [repl writer worker 0] build index done. scanned 1 total records. 
0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:14.879-0500 c20013| 2016-04-06T02:52:07.036-0500 D STORAGE [repl writer worker 0] config.lockpings: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:14.880-0500 c20012| 2016-04-06T02:52:07.039-0500 I INDEX [repl writer worker 3] build index done. scanned 1 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:14.888-0500 c20012| 2016-04-06T02:52:07.039-0500 D STORAGE [repl writer worker 3] config.lockpings: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:14.892-0500 c20013| 2016-04-06T02:52:07.039-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.893-0500 c20013| 2016-04-06T02:52:07.039-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.894-0500 c20013| 2016-04-06T02:52:07.039-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.896-0500 c20012| 2016-04-06T02:52:07.039-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.897-0500 c20012| 2016-04-06T02:52:07.039-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.899-0500 c20012| 2016-04-06T02:52:07.039-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.900-0500 c20012| 2016-04-06T02:52:07.039-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.901-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.901-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.903-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.903-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.903-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.905-0500 c20013| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.905-0500 c20013| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.906-0500 c20013| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.908-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.909-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer 
worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.910-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.911-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.912-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.913-0500 c20012| 2016-04-06T02:52:07.040-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.914-0500 c20013| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.915-0500 c20013| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.915-0500 c20013| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.916-0500 c20013| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.920-0500 c20013| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.921-0500 c20012| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.921-0500 c20013| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.922-0500 c20013| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.923-0500 c20013| 2016-04-06T02:52:07.041-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.925-0500 c20013| 2016-04-06T02:52:07.042-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.926-0500 c20013| 2016-04-06T02:52:07.042-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:14.927-0500 c20012| 2016-04-06T02:52:07.042-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:14.929-0500 c20012| 2016-04-06T02:52:07.042-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.940-0500 c20012| 2016-04-06T02:52:07.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 156 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.941-0500 c20012| 2016-04-06T02:52:07.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 156 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:14.945-0500 c20011| 2016-04-06T02:52:07.042-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.945-0500 c20011| 2016-04-06T02:52:07.042-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:14.948-0500 c20011| 2016-04-06T02:52:07.042-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.950-0500 c20011| 2016-04-06T02:52:07.042-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.952-0500 c20011| 2016-04-06T02:52:07.042-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:14.955-0500 c20011| 2016-04-06T02:52:07.042-0500 D REPL [conn16] received notification 
that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|14, t: 1 } and is durable through: { ts: Timestamp 1459929126000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.958-0500 c20011| 2016-04-06T02:52:07.042-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929126000|14, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|13, t: 1 }, name-id: "49" } [js_test:multi_coll_drop] 2016-04-06T02:52:14.962-0500 c20011| 2016-04-06T02:52:07.042-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:14.966-0500 c20011| 2016-04-06T02:52:07.042-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|14, t: 1 } and is durable through: { ts: Timestamp 1459929126000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.966-0500 c20011| 2016-04-06T02:52:07.042-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929126000|14, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929126000|13, t: 1 }, name-id: "49" } [js_test:multi_coll_drop] 2016-04-06T02:52:14.968-0500 c20011| 2016-04-06T02:52:07.042-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.972-0500 c20011| 2016-04-06T02:52:07.042-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:14.973-0500 c20012| 2016-04-06T02:52:07.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 156 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.974-0500 c20013| 2016-04-06T02:52:07.042-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:14.977-0500 c20013| 2016-04-06T02:52:07.042-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.980-0500 c20013| 2016-04-06T02:52:07.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 155 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.981-0500 c20013| 2016-04-06T02:52:07.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 155 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:14.982-0500 c20013| 2016-04-06T02:52:07.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 155 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.986-0500 c20013| 2016-04-06T02:52:07.043-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.989-0500 c20013| 2016-04-06T02:52:07.043-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 157 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.989-0500 c20013| 2016-04-06T02:52:07.043-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 157 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:14.993-0500 c20011| 2016-04-06T02:52:07.043-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:14.994-0500 c20011| 2016-04-06T02:52:07.043-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:14.995-0500 c20011| 2016-04-06T02:52:07.043-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.996-0500 c20011| 2016-04-06T02:52:07.043-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|14, t: 1 } and is durable through: { ts: Timestamp 1459929126000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:14.997-0500 c20011| 2016-04-06T02:52:07.043-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.012-0500 c20011| 2016-04-06T02:52:07.043-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.016-0500 c20011| 2016-04-06T02:52:07.043-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 45ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.023-0500 c20011| 2016-04-06T02:52:07.043-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|13, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 46ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.026-0500 c20012| 2016-04-06T02:52:07.043-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 155 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.028-0500 c20013| 2016-04-06T02:52:07.043-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 157 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.044-0500 c20013| 
2016-04-06T02:52:07.043-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 154 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.045-0500 c20013| 2016-04-06T02:52:07.044-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.048-0500 c20011| 2016-04-06T02:52:07.044-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.053-0500 c20011| 2016-04-06T02:52:07.044-0500 I COMMAND [conn10] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 72ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.053-0500 c20012| 2016-04-06T02:52:07.044-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929126000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.060-0500 c20012| 2016-04-06T02:52:07.044-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:15.061-0500 c20013| 2016-04-06T02:52:07.044-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:15.068-0500 c20013| 2016-04-06T02:52:07.044-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 160 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.044-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.069-0500 c20013| 2016-04-06T02:52:07.044-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 160 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.075-0500 c20012| 2016-04-06T02:52:07.044-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 159 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.044-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.080-0500 s20014| 2016-04-06T02:52:07.044-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 30 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929126000|14, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:15.083-0500 c20012| 2016-04-06T02:52:07.045-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 159 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.085-0500 s20014| 2016-04-06T02:52:07.045-0500 D ASIO [mongosMain] startCommand: RemoteCommand 32 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.045-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: 
"majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.087-0500 c20011| 2016-04-06T02:52:07.045-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.089-0500 s20014| 2016-04-06T02:52:07.045-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 32 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.091-0500 c20011| 2016-04-06T02:52:07.045-0500 D COMMAND [conn10] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.092-0500 c20011| 2016-04-06T02:52:07.045-0500 D STORAGE [conn10] stored meta data for config.tags @ RecordId(12) [js_test:multi_coll_drop] 2016-04-06T02:52:15.096-0500 c20011| 2016-04-06T02:52:07.045-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-28--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:15.100-0500 c20012| 2016-04-06T02:52:07.049-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.106-0500 c20012| 2016-04-06T02:52:07.049-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 160 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.107-0500 c20012| 2016-04-06T02:52:07.049-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 160 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.113-0500 c20012| 2016-04-06T02:52:07.049-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 160 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.119-0500 c20011| 2016-04-06T02:52:07.049-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, 
memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.120-0500 c20011| 2016-04-06T02:52:07.049-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:15.126-0500 c20011| 2016-04-06T02:52:07.049-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929126000|14, t: 1 } and is durable through: { ts: Timestamp 1459929126000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.130-0500 c20011| 2016-04-06T02:52:07.049-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.135-0500 c20011| 2016-04-06T02:52:07.049-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.138-0500 c20011| 2016-04-06T02:52:07.060-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-28--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.140-0500 c20011| 2016-04-06T02:52:07.060-0500 D STORAGE [conn10] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.145-0500 c20011| 2016-04-06T02:52:07.060-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-29--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.148-0500 c20011| 2016-04-06T02:52:07.060-0500 D STORAGE [conn10] create uri: table:index-29--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.149-0500 c20011| 2016-04-06T02:52:07.073-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-29--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:15.150-0500 c20011| 2016-04-06T02:52:07.073-0500 D STORAGE [conn10] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.154-0500 c20011| 2016-04-06T02:52:07.073-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-30--6404702321693896372 config: 
type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.157-0500 c20011| 2016-04-06T02:52:07.073-0500 D STORAGE [conn10] create uri: table:index-30--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.160-0500 c20011| 2016-04-06T02:52:07.088-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:456 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 43ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.166-0500 c20011| 2016-04-06T02:52:07.088-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:456 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 44ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.171-0500 c20013| 2016-04-06T02:52:07.088-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 160 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|1, t: 1, h: -2366190959945118044, v: 2, op: "c", ns: "config.$cmd", o: { create: "tags" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.173-0500 c20013| 2016-04-06T02:52:07.089-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|1 and ending at ts: Timestamp 1459929127000|1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.174-0500 c20013| 2016-04-06T02:52:07.089-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:52:15.176-0500 c20013| 2016-04-06T02:52:07.089-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:15.177-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.179-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.182-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.184-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.185-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.186-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.187-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.187-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.188-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.190-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.190-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.191-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.192-0500 c20013| 2016-04-06T02:52:07.089-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.194-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.218-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.218-0500 c20013| 2016-04-06T02:52:07.089-0500 D STORAGE [repl writer worker 0] create collection config.tags {} [js_test:multi_coll_drop] 2016-04-06T02:52:15.219-0500 c20013| 2016-04-06T02:52:07.090-0500 D STORAGE [repl writer worker 0] stored meta data for config.tags @ RecordId(13) [js_test:multi_coll_drop] 2016-04-06T02:52:15.224-0500 c20013| 2016-04-06T02:52:07.090-0500 D STORAGE [repl writer worker 0] WiredTigerKVEngine::createRecordStore uri: table:collection-30-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:15.226-0500 c20013| 2016-04-06T02:52:07.089-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:15.229-0500 c20013| 2016-04-06T02:52:07.091-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 162 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.091-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.229-0500 c20013| 2016-04-06T02:52:07.091-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 162 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.232-0500 c20011| 2016-04-06T02:52:07.091-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.235-0500 c20012| 2016-04-06T02:52:07.088-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 159 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|1, t: 1, h: -2366190959945118044, v: 2, op: "c", ns: "config.$cmd", o: { create: "tags" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.239-0500 c20012| 2016-04-06T02:52:07.089-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|1 and ending at ts: Timestamp 1459929127000|1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.239-0500 c20012| 2016-04-06T02:52:07.089-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:52:15.241-0500 c20012| 2016-04-06T02:52:07.090-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:15.243-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.243-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.244-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.245-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.246-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.247-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.247-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.249-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.251-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.253-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool 
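
The getMore traffic above is the replication fetcher on each secondary (c20012, c20013) tailing the primary's oplog on c20011: the cursor is re-issued with a 2.5-second timeout, and each response's nextBatch carries the next operations to apply, here the { op: "c", o: { create: "tags" } } entry for config.tags. A rough mongo-shell rendering of one such fetch, with the cursor id and optimes copied from the log (in the server these fields are filled in by the replication executor, not typed by hand):

    // Sketch: one oplog fetch as rsBackgroundSync issues it (values from the log above).
    var res = db.getSiblingDB("local").runCommand({
        getMore: NumberLong("17466612721"),   // cursor id from the fetcher's initial find
        collection: "oplog.rs",
        maxTimeMS: 2500,                      // wait up to 2.5s for new entries
        term: 1,                              // current replication term
        lastKnownCommittedOpTime: { ts: Timestamp(1459929126, 14), t: NumberLong(1) }
    });
    // res.cursor.nextBatch then holds entries like the create-collection op seen in
    // the responses to requests 159/160 above.
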
[js_test:multi_coll_drop] 2016-04-06T02:52:15.260-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.267-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.269-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.271-0500 c20012| 2016-04-06T02:52:07.090-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.275-0500 c20012| 2016-04-06T02:52:07.090-0500 D STORAGE [repl writer worker 5] create collection config.tags {} [js_test:multi_coll_drop] 2016-04-06T02:52:15.277-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.281-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.282-0500 c20012| 2016-04-06T02:52:07.090-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.283-0500 c20012| 2016-04-06T02:52:07.090-0500 D STORAGE [repl writer worker 5] stored meta data for config.tags @ RecordId(13) [js_test:multi_coll_drop] 2016-04-06T02:52:15.287-0500 c20012| 2016-04-06T02:52:07.090-0500 D STORAGE [repl writer worker 5] WiredTigerKVEngine::createRecordStore uri: table:collection-30-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:15.290-0500 c20012| 2016-04-06T02:52:07.091-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 163 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.091-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.290-0500 c20012| 2016-04-06T02:52:07.091-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 163 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.293-0500 c20011| 2016-04-06T02:52:07.092-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.296-0500 c20013| 2016-04-06T02:52:07.092-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.296-0500 c20011| 2016-04-06T02:52:07.094-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-30--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:15.297-0500 c20011| 2016-04-06T02:52:07.094-0500 I INDEX [conn10] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" } [js_test:multi_coll_drop] 2016-04-06T02:52:15.298-0500 c20011| 2016-04-06T02:52:07.094-0500 I INDEX [conn10] building index using bulk method [js_test:multi_coll_drop] 
2016-04-06T02:52:15.299-0500 c20011| 2016-04-06T02:52:07.094-0500 D INDEX [conn10] bulk commit starting for index: ns_1_min_1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.302-0500 c20013| 2016-04-06T02:52:07.094-0500 D STORAGE [repl writer worker 0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-30-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.303-0500 c20013| 2016-04-06T02:52:07.094-0500 D STORAGE [repl writer worker 0] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.305-0500 c20013| 2016-04-06T02:52:07.095-0500 D STORAGE [repl writer worker 0] WiredTigerKVEngine::createSortedDataInterface ident: index-31-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.307-0500 c20013| 2016-04-06T02:52:07.095-0500 D STORAGE [repl writer worker 0] create uri: table:index-31-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.308-0500 c20011| 2016-04-06T02:52:07.095-0500 D INDEX [conn10] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:15.310-0500 c20011| 2016-04-06T02:52:07.099-0500 I INDEX [conn10] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:15.315-0500 c20011| 2016-04-06T02:52:07.099-0500 D STORAGE [conn10] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.315-0500 c20013| 2016-04-06T02:52:07.100-0500 D STORAGE [repl writer worker 0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-31-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:15.316-0500 c20013| 2016-04-06T02:52:07.100-0500 D STORAGE [repl writer worker 0] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.316-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.317-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.321-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.325-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.326-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.345-0500 c20011| 2016-04-06T02:52:07.100-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, 
t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:543 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.348-0500 c20013| 2016-04-06T02:52:07.100-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 162 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|2, t: 1, h: -7750154275698888387, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0273876c4cfd2eb3ec2'), ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.349-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.354-0500 c20013| 2016-04-06T02:52:07.100-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|2 and ending at ts: Timestamp 1459929127000|2 [js_test:multi_coll_drop] 2016-04-06T02:52:15.356-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.358-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.359-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.360-0500 c20013| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.366-0500 c20011| 2016-04-06T02:52:07.101-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:543 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.369-0500 c20013| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.375-0500 c20013| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.376-0500 c20013| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.381-0500 c20011| 2016-04-06T02:52:07.101-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.382-0500 c20011| 
2016-04-06T02:52:07.101-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:15.392-0500 c20011| 2016-04-06T02:52:07.101-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|1, t: 1 } and is durable through: { ts: Timestamp 1459929126000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.399-0500 c20011| 2016-04-06T02:52:07.101-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.400-0500 c20013| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.408-0500 c20011| 2016-04-06T02:52:07.101-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.410-0500 c20013| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.415-0500 c20012| 2016-04-06T02:52:07.094-0500 D STORAGE [repl writer worker 5] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-30-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.418-0500 c20012| 2016-04-06T02:52:07.094-0500 D STORAGE [repl writer worker 5] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.426-0500 c20012| 2016-04-06T02:52:07.095-0500 D STORAGE [repl writer worker 5] WiredTigerKVEngine::createSortedDataInterface ident: index-31-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.441-0500 c20012| 2016-04-06T02:52:07.095-0500 D STORAGE [repl writer worker 5] create uri: table:index-31-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.451-0500 c20012| 2016-04-06T02:52:07.100-0500 D STORAGE [repl writer worker 5] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-31-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:15.452-0500 c20012| 2016-04-06T02:52:07.100-0500 D STORAGE [repl writer worker 5] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.456-0500 
c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.461-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.463-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.466-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.466-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.467-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.469-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.470-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.472-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.476-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.477-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.478-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.480-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.480-0500 c20012| 2016-04-06T02:52:07.100-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.482-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.483-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.485-0500 c20012| 2016-04-06T02:52:07.101-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:15.490-0500 c20012| 2016-04-06T02:52:07.101-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 163 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|2, t: 1, h: -7750154275698888387, v: 2, op: "i", ns: "config.system.indexes", o: { _id: ObjectId('5704c0273876c4cfd2eb3ec2'), ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.497-0500 c20012| 2016-04-06T02:52:07.101-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.504-0500 c20012| 2016-04-06T02:52:07.101-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 165 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.506-0500 c20012| 2016-04-06T02:52:07.101-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|2 and ending at ts: Timestamp 1459929127000|2 [js_test:multi_coll_drop] 2016-04-06T02:52:15.509-0500 c20012| 2016-04-06T02:52:07.101-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 165 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.510-0500 c20012| 2016-04-06T02:52:07.101-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:15.511-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.515-0500 c20012| 2016-04-06T02:52:07.101-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 165 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.515-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.516-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.518-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.519-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.520-0500 c20013| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.520-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.521-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.521-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.524-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.526-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.527-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.529-0500 c20012| 2016-04-06T02:52:07.101-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.530-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.530-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.534-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.535-0500 c20012| 2016-04-06T02:52:07.101-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.535-0500 c20012| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.538-0500 c20012| 2016-04-06T02:52:07.102-0500 D STORAGE [repl writer worker 6] WiredTigerKVEngine::createSortedDataInterface ident: 
index-32-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.542-0500 c20012| 2016-04-06T02:52:07.102-0500 D STORAGE [repl writer worker 6] create uri: table:index-32-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.544-0500 c20013| 2016-04-06T02:52:07.102-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:15.547-0500 c20013| 2016-04-06T02:52:07.102-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:15.550-0500 c20013| 2016-04-06T02:52:07.102-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.556-0500 c20013| 2016-04-06T02:52:07.102-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 164 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.559-0500 c20013| 2016-04-06T02:52:07.102-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 164 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.561-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.562-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.563-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.565-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:15.566-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.568-0500 c20013| 2016-04-06T02:52:07.102-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 164 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.570-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.576-0500 c20011| 2016-04-06T02:52:07.102-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.578-0500 c20011| 2016-04-06T02:52:07.102-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:15.579-0500 c20011| 2016-04-06T02:52:07.102-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.582-0500 c20011| 2016-04-06T02:52:07.102-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|1, t: 1 } and is durable through: { ts: Timestamp 1459929126000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.588-0500 c20011| 2016-04-06T02:52:07.102-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929126000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.595-0500 c20011| 2016-04-06T02:52:07.102-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.595-0500 c20011| 2016-04-06T02:52:07.102-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:15.596-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.597-0500 c20013| 
2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.603-0500 c20011| 2016-04-06T02:52:07.102-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|1, t: 1 } and is durable through: { ts: Timestamp 1459929127000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.605-0500 c20011| 2016-04-06T02:52:07.102-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.611-0500 c20011| 2016-04-06T02:52:07.102-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.615-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.616-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.617-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.618-0500 c20013| 2016-04-06T02:52:07.102-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.619-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.629-0500 c20013| 2016-04-06T02:52:07.102-0500 D STORAGE [repl writer worker 1] WiredTigerKVEngine::createSortedDataInterface ident: index-32-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.631-0500 c20013| 2016-04-06T02:52:07.102-0500 D STORAGE [repl writer worker 1] create uri: table:index-32-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "unique" : true, "key" : { "ns" : 1, "min" : 1 }, "name" : "ns_1_min_1", "ns" : "config.tags" }), [js_test:multi_coll_drop] 2016-04-06T02:52:15.632-0500 c20013| 2016-04-06T02:52:07.102-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.636-0500 c20013| 2016-04-06T02:52:07.103-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool 
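
At this point the unique ns_1_min_1 index that conn10 built on the primary is being replayed on both secondaries: each rsSync batch of size 1 is handed to the repl writer worker pool, the WiredTiger idents (index-32-...) are created, and every node reports its progress back with replSetUpdatePosition, which is how the primary learns when a majority holds the write and can advance _lastCommittedOpTime. On this 3.3 branch the index request itself travels as a legacy insert into config.system.indexes (createIndexes is the modern equivalent); reproduced in the shell from the command logged on conn10:

    // Sketch: the metadata index on config.tags as the log shows it being requested.
    db.getSiblingDB("config").runCommand({
        insert: "system.indexes",
        documents: [ { ns: "config.tags", key: { ns: 1, min: 1 },
                       name: "ns_1_min_1", unique: true } ],
        writeConcern: { w: "majority", wtimeout: 0 }   // matches the logged writeConcern
    });
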
[js_test:multi_coll_drop] 2016-04-06T02:52:15.645-0500 c20013| 2016-04-06T02:52:07.103-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 166 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.103-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.646-0500 c20013| 2016-04-06T02:52:07.103-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 166 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.647-0500 c20013| 2016-04-06T02:52:07.103-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.650-0500 c20011| 2016-04-06T02:52:07.103-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.653-0500 c20011| 2016-04-06T02:52:07.103-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.661-0500 c20013| 2016-04-06T02:52:07.103-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.673-0500 c20013| 2016-04-06T02:52:07.104-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 167 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.674-0500 c20013| 2016-04-06T02:52:07.104-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 167 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.678-0500 c20013| 2016-04-06T02:52:07.104-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 167 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.683-0500 c20011| 2016-04-06T02:52:07.104-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 
1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.685-0500 c20011| 2016-04-06T02:52:07.104-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:15.687-0500 c20011| 2016-04-06T02:52:07.104-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.695-0500 c20011| 2016-04-06T02:52:07.104-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|1, t: 1 } and is durable through: { ts: Timestamp 1459929127000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.696-0500 c20011| 2016-04-06T02:52:07.104-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.705-0500 c20011| 2016-04-06T02:52:07.104-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.708-0500 c20011| 2016-04-06T02:52:07.104-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.712-0500 c20011| 2016-04-06T02:52:07.104-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.715-0500 c20013| 2016-04-06T02:52:07.104-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 166 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.717-0500 c20013| 2016-04-06T02:52:07.104-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.719-0500 c20013| 2016-04-06T02:52:07.104-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:15.723-0500 c20013| 2016-04-06T02:52:07.104-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 170 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.104-0500 cmd:{ getMore: 17466612721, 
collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.725-0500 c20013| 2016-04-06T02:52:07.104-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 170 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.728-0500 c20011| 2016-04-06T02:52:07.104-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.732-0500 c20011| 2016-04-06T02:52:07.104-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.734-0500 c20011| 2016-04-06T02:52:07.105-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929127000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|1, t: 1 }, name-id: "56" } [js_test:multi_coll_drop] 2016-04-06T02:52:15.737-0500 c20013| 2016-04-06T02:52:07.105-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.749-0500 c20012| 2016-04-06T02:52:07.102-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.761-0500 c20012| 2016-04-06T02:52:07.102-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 167 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.762-0500 c20012| 2016-04-06T02:52:07.102-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 167 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.764-0500 c20012| 2016-04-06T02:52:07.102-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 167 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.773-0500 c20012| 2016-04-06T02:52:07.103-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 169 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.103-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929126000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.780-0500 c20012| 2016-04-06T02:52:07.103-0500 D ASIO 
[NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 169 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.795-0500 c20012| 2016-04-06T02:52:07.104-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 169 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.800-0500 c20012| 2016-04-06T02:52:07.104-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.803-0500 c20012| 2016-04-06T02:52:07.104-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:15.807-0500 c20012| 2016-04-06T02:52:07.104-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 171 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.104-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:15.808-0500 c20012| 2016-04-06T02:52:07.104-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 171 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.811-0500 c20013| 2016-04-06T02:52:07.107-0500 D STORAGE [repl writer worker 1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-32-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:15.813-0500 c20013| 2016-04-06T02:52:07.107-0500 I INDEX [repl writer worker 1] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" } [js_test:multi_coll_drop] 2016-04-06T02:52:15.813-0500 c20013| 2016-04-06T02:52:07.107-0500 I INDEX [repl writer worker 1] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:15.816-0500 c20013| 2016-04-06T02:52:07.107-0500 D INDEX [repl writer worker 1] bulk commit starting for index: ns_1_min_1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.819-0500 c20013| 2016-04-06T02:52:07.108-0500 D INDEX [repl writer worker 1] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:15.822-0500 c20013| 2016-04-06T02:52:07.111-0500 I INDEX [repl writer worker 1] build index done. scanned 0 total records. 
0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:15.824-0500 c20013| 2016-04-06T02:52:07.111-0500 D STORAGE [repl writer worker 1] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.824-0500 c20013| 2016-04-06T02:52:07.111-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.825-0500 c20013| 2016-04-06T02:52:07.111-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.827-0500 c20013| 2016-04-06T02:52:07.111-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.831-0500 c20013| 2016-04-06T02:52:07.111-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.832-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.834-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.839-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.842-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.843-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.844-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.846-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.849-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.849-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.851-0500 c20013| 2016-04-06T02:52:07.112-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.854-0500 c20013| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.856-0500 c20013| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.857-0500 c20012| 2016-04-06T02:52:07.112-0500 D STORAGE [repl writer worker 6] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-32-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:15.859-0500 c20012| 2016-04-06T02:52:07.112-0500 I INDEX [repl writer worker 6] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: 
"config.tags" } [js_test:multi_coll_drop] 2016-04-06T02:52:15.862-0500 c20012| 2016-04-06T02:52:07.112-0500 I INDEX [repl writer worker 6] building index using bulk method [js_test:multi_coll_drop] 2016-04-06T02:52:15.863-0500 c20012| 2016-04-06T02:52:07.112-0500 D INDEX [repl writer worker 6] bulk commit starting for index: ns_1_min_1 [js_test:multi_coll_drop] 2016-04-06T02:52:15.866-0500 c20012| 2016-04-06T02:52:07.112-0500 D INDEX [repl writer worker 6] done building bottom layer, going to commit [js_test:multi_coll_drop] 2016-04-06T02:52:15.867-0500 c20012| 2016-04-06T02:52:07.113-0500 I INDEX [repl writer worker 6] build index done. scanned 0 total records. 0 secs [js_test:multi_coll_drop] 2016-04-06T02:52:15.869-0500 c20012| 2016-04-06T02:52:07.113-0500 D STORAGE [repl writer worker 6] config.tags: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:15.869-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.874-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.876-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.877-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.878-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.879-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.880-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.880-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.881-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.884-0500 c20012| 2016-04-06T02:52:07.113-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.886-0500 c20013| 2016-04-06T02:52:07.114-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:15.891-0500 c20013| 2016-04-06T02:52:07.114-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.894-0500 c20013| 2016-04-06T02:52:07.114-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 171 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.896-0500 c20013| 2016-04-06T02:52:07.114-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 171 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.898-0500 c20013| 2016-04-06T02:52:07.115-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 171 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.903-0500 c20011| 2016-04-06T02:52:07.114-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.904-0500 c20011| 2016-04-06T02:52:07.114-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:15.906-0500 c20011| 2016-04-06T02:52:07.114-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.909-0500 c20011| 2016-04-06T02:52:07.114-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|2, t: 1 } and is durable through: { ts: Timestamp 1459929127000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.913-0500 c20011| 2016-04-06T02:52:07.114-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|1, t: 1 }, name-id: "56" } [js_test:multi_coll_drop] 2016-04-06T02:52:15.917-0500 c20011| 2016-04-06T02:52:07.114-0500 I 
COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.919-0500 2016-04-06T02:52:07.115-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20014, reason: Connection refused [js_test:multi_coll_drop] 2016-04-06T02:52:15.922-0500 c20012| 2016-04-06T02:52:07.115-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.925-0500 c20012| 2016-04-06T02:52:07.115-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.926-0500 c20012| 2016-04-06T02:52:07.115-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.929-0500 c20012| 2016-04-06T02:52:07.115-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.931-0500 c20012| 2016-04-06T02:52:07.119-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:15.934-0500 c20013| 2016-04-06T02:52:07.121-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.941-0500 c20013| 2016-04-06T02:52:07.121-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 173 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.942-0500 c20013| 2016-04-06T02:52:07.121-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 173 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:15.948-0500 c20011| 2016-04-06T02:52:07.121-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { 
ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.949-0500 c20011| 2016-04-06T02:52:07.122-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:15.950-0500 c20011| 2016-04-06T02:52:07.122-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.952-0500 c20011| 2016-04-06T02:52:07.122-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|2, t: 1 } and is durable through: { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.956-0500 c20011| 2016-04-06T02:52:07.122-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.960-0500 c20011| 2016-04-06T02:52:07.122-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.963-0500 c20013| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 173 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:15.969-0500 c20011| 2016-04-06T02:52:07.122-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|1, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.977-0500 c20011| 2016-04-06T02:52:07.122-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:15.978-0500 c20011| 2016-04-06T02:52:07.122-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:15.988-0500 c20011| 2016-04-06T02:52:07.122-0500 I COMMAND [conn10] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.tags", key: { ns: 1, min: 1 }, 
name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 76ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.995-0500 c20011| 2016-04-06T02:52:07.122-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|1, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:52:15.997-0500 c20011| 2016-04-06T02:52:07.122-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|2, t: 1 } and is durable through: { ts: Timestamp 1459929127000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.002-0500 c20011| 2016-04-06T02:52:07.122-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.006-0500 c20011| 2016-04-06T02:52:07.122-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.009-0500 c20012| 2016-04-06T02:52:07.121-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.010-0500 c20012| 2016-04-06T02:52:07.122-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.014-0500 c20012| 2016-04-06T02:52:07.122-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.019-0500 c20012| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 172 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.021-0500 c20012| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 172 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.022-0500 c20012| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 171 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.024-0500 c20012| 2016-04-06T02:52:07.122-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.025-0500 c20012| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 172 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.025-0500 c20012| 2016-04-06T02:52:07.122-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:16.027-0500 c20012| 2016-04-06T02:52:07.122-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 175 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.122-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.030-0500 c20012| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 175 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.030-0500 c20013| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 170 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.031-0500 c20013| 2016-04-06T02:52:07.122-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.034-0500 c20013| 2016-04-06T02:52:07.122-0500 D REPL [rsBackgroundSync-0] fetcher read 0 
operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:16.036-0500 c20013| 2016-04-06T02:52:07.122-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 176 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.122-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.037-0500 c20013| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 176 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.043-0500 s20014| 2016-04-06T02:52:07.122-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 32 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929127000|2, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:16.045-0500 c20011| 2016-04-06T02:52:07.122-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.046-0500 c20011| 2016-04-06T02:52:07.122-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.048-0500 s20014| 2016-04-06T02:52:07.123-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker [js_test:multi_coll_drop] 2016-04-06T02:52:16.048-0500 s20014| 2016-04-06T02:52:07.123-0500 D COMMAND [Balancer] BackgroundJob starting: Balancer [js_test:multi_coll_drop] 2016-04-06T02:52:16.050-0500 s20014| 2016-04-06T02:52:07.123-0500 I SHARDING [Balancer] about to contact config servers and shards [js_test:multi_coll_drop] 2016-04-06T02:52:16.056-0500 s20014| 2016-04-06T02:52:07.123-0500 D ASIO [mongosMain] startCommand: RemoteCommand 34 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:37.123-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.061-0500 s20014| 2016-04-06T02:52:07.123-0500 D ASIO [Balancer] startCommand: RemoteCommand 35 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:37.123-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.063-0500 s20014| 2016-04-06T02:52:07.123-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 34 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.063-0500 s20014| 2016-04-06T02:52:07.123-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:16.065-0500 s20014| 2016-04-06T02:52:07.123-0500 D COMMAND [ClusterCursorCleanupJob] BackgroundJob starting: ClusterCursorCleanupJob [js_test:multi_coll_drop] 2016-04-06T02:52:16.069-0500 c20011| 2016-04-06T02:52:07.123-0500 D COMMAND [conn10] run command admin.$cmd { _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.084-0500 c20011| 2016-04-06T02:52:07.123-0500 D COMMAND [conn10] command: _getUserCacheGeneration [js_test:multi_coll_drop] 2016-04-06T02:52:16.087-0500 c20012| 2016-04-06T02:52:07.123-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36634 #7 
(5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:16.089-0500 s20014| 2016-04-06T02:52:07.123-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 36 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:16.092-0500 s20014| 2016-04-06T02:52:07.123-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 34 finished with response: { cacheGeneration: ObjectId('5704c01c3876c4cfd2eb3eb7'), ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.094-0500 c20012| 2016-04-06T02:52:07.124-0500 D COMMAND [conn7] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:16.099-0500 c20011| 2016-04-06T02:52:07.123-0500 I COMMAND [conn10] command admin.$cmd command: _getUserCacheGeneration { _getUserCacheGeneration: 1, maxTimeMS: 30000 } numYields:0 reslen:337 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.111-0500 c20012| 2016-04-06T02:52:07.124-0500 I COMMAND [conn7] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.114-0500 c20012| 2016-04-06T02:52:07.124-0500 D COMMAND [conn7] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.120-0500 c20012| 2016-04-06T02:52:07.124-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.126-0500 c20012| 2016-04-06T02:52:07.124-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.130-0500 c20012| 2016-04-06T02:52:07.124-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.132-0500 c20012| 2016-04-06T02:52:07.124-0500 I COMMAND [conn7] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|2, t: 1 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:370 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.136-0500 s20014| 2016-04-06T02:52:07.124-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:16.139-0500 s20014| 2016-04-06T02:52:07.124-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 36 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:16.141-0500 s20014| 2016-04-06T02:52:07.124-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 35 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:16.144-0500 s20014| 2016-04-06T02:52:07.124-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 35 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.149-0500 s20014| 2016-04-06T02:52:07.124-0500 D SHARDING [Balancer] found 0 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.151-0500 s20014| 2016-04-06T02:52:07.125-0500 I SHARDING [Balancer] config servers and shards contacted successfully [js_test:multi_coll_drop] 2016-04-06T02:52:16.151-0500 s20014| 2016-04-06T02:52:07.125-0500 I SHARDING [Balancer] balancer id: mongovm16:20014 started [js_test:multi_coll_drop] 2016-04-06T02:52:16.163-0500 s20014| 2016-04-06T02:52:07.125-0500 D ASIO [Balancer] startCommand: RemoteCommand 39 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.125-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929127125), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.164-0500 s20014| 2016-04-06T02:52:07.125-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 39 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.168-0500 c20011| 2016-04-06T02:52:07.125-0500 D COMMAND [conn10] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929127125), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.170-0500 c20011| 2016-04-06T02:52:07.125-0500 D STORAGE [conn10] create collection config.mongos {} [js_test:multi_coll_drop] 2016-04-06T02:52:16.175-0500 c20011| 2016-04-06T02:52:07.125-0500 D STORAGE [conn10] stored meta data for config.mongos @ RecordId(13) [js_test:multi_coll_drop] 2016-04-06T02:52:16.182-0500 c20011| 2016-04-06T02:52:07.125-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: 
table:collection-31--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:16.189-0500 c20012| 2016-04-06T02:52:07.131-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.197-0500 c20012| 2016-04-06T02:52:07.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 176 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.198-0500 c20012| 2016-04-06T02:52:07.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 176 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.201-0500 c20012| 2016-04-06T02:52:07.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 176 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.211-0500 c20011| 2016-04-06T02:52:07.131-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.211-0500 c20011| 2016-04-06T02:52:07.131-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:16.220-0500 c20011| 2016-04-06T02:52:07.131-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|2, t: 1 } and is durable through: { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.222-0500 c20011| 2016-04-06T02:52:07.131-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.228-0500 c20011| 2016-04-06T02:52:07.131-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.230-0500 s20014| 2016-04-06T02:52:07.131-0500 D COMMAND [UserCacheInvalidatorThread] BackgroundJob starting: UserCacheInvalidatorThread [js_test:multi_coll_drop] 2016-04-06T02:52:16.231-0500 s20014| 2016-04-06T02:52:07.131-0500 D NETWORK [mongosMain] fd limit hard:64000 soft:64000 max conn: 51200 [js_test:multi_coll_drop] 2016-04-06T02:52:16.234-0500 s20014| 2016-04-06T02:52:07.132-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner [js_test:multi_coll_drop] 2016-04-06T02:52:16.237-0500 c20011| 2016-04-06T02:52:07.132-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-31--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:16.238-0500 c20011| 2016-04-06T02:52:07.132-0500 D STORAGE [conn10] config.mongos: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:16.239-0500 c20011| 2016-04-06T02:52:07.133-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-32--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.mongos" }), [js_test:multi_coll_drop] 2016-04-06T02:52:16.243-0500 c20011| 2016-04-06T02:52:07.133-0500 D STORAGE [conn10] create uri: table:index-32--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.mongos" }), [js_test:multi_coll_drop] 2016-04-06T02:52:16.246-0500 c20011| 2016-04-06T02:52:07.138-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 37 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:17.138-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.247-0500 c20011| 2016-04-06T02:52:07.138-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 37 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:16.255-0500 c20011| 2016-04-06T02:52:07.139-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 37 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, opTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.257-0500 c20013| 2016-04-06T02:52:07.138-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.257-0500 c20013| 2016-04-06T02:52:07.138-0500 D COMMAND [conn7] command: replSetHeartbeat 
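The oplog tailing traffic above -- each secondary's rsBackgroundSync fetcher repeatedly issuing getMore against local.oplog.rs on the primary with maxTimeMS: 2500 and its lastKnownCommittedOpTime -- can be approximated by hand from a mongo shell. A minimal sketch, assuming a direct connection to the primary; the term and lastKnownCommittedOpTime fields in the logged commands are internal replication parameters that an external client omits, and the start timestamp below is just the optime taken from this log:

    // Establish a tailable, awaitData cursor on the oplog, then poll it
    // the way the fetcher's getMore loop does above.
    var local = db.getSiblingDB("local");

    var first = local.runCommand({
        find: "oplog.rs",
        filter: { ts: { $gte: Timestamp(1459929127, 1) } }, // placeholder start optime from this log
        tailable: true,
        awaitData: true
    });

    // Each subsequent batch comes from getMore, mirroring RemoteCommand
    // 171/175/179 above; an empty nextBatch after ~2.5s of waiting matches
    // the "fetcher read 0 operations from remote oplog" lines.
    var next = local.runCommand({
        getMore: first.cursor.id,
        collection: "oplog.rs",
        maxTimeMS: 2500
    });
    printjson(next.cursor.nextBatch);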
[js_test:multi_coll_drop] 2016-04-06T02:52:16.262-0500 c20013| 2016-04-06T02:52:07.139-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.264-0500 c20011| 2016-04-06T02:52:07.139-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:09.139Z [js_test:multi_coll_drop] 2016-04-06T02:52:16.265-0500 c20011| 2016-04-06T02:52:07.140-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-32--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:16.267-0500 c20011| 2016-04-06T02:52:07.140-0500 D STORAGE [conn10] config.mongos: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:16.268-0500 c20011| 2016-04-06T02:52:07.140-0500 D QUERY [conn10] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:16.275-0500 c20011| 2016-04-06T02:52:07.140-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:458 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.281-0500 c20011| 2016-04-06T02:52:07.140-0500 I WRITE [conn10] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929127125), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 15ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.287-0500 c20013| 2016-04-06T02:52:07.140-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 176 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|3, t: 1, h: 75019575566361923, v: 2, op: "c", ns: "config.$cmd", o: { create: "mongos" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.293-0500 c20011| 2016-04-06T02:52:07.140-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:458 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.298-0500 c20012| 2016-04-06T02:52:07.140-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 175 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|3, t: 1, h: 75019575566361923, v: 2, op: "c", ns: "config.$cmd", o: { create: "mongos" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.302-0500 c20012| 2016-04-06T02:52:07.140-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: 
Timestamp 1459929127000|3 and ending at ts: Timestamp 1459929127000|3 [js_test:multi_coll_drop] 2016-04-06T02:52:16.304-0500 c20013| 2016-04-06T02:52:07.140-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|3 and ending at ts: Timestamp 1459929127000|3 [js_test:multi_coll_drop] 2016-04-06T02:52:16.305-0500 c20013| 2016-04-06T02:52:07.140-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.308-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.311-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.314-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.315-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.316-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.320-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.323-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.323-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.325-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.326-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.327-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.329-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.332-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.333-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.334-0500 c20013| 2016-04-06T02:52:07.141-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:16.337-0500 c20013| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.339-0500 c20013| 2016-04-06T02:52:07.141-0500 D STORAGE [repl writer worker 15] create collection config.mongos {} [js_test:multi_coll_drop] 2016-04-06T02:52:16.341-0500 c20013| 
2016-04-06T02:52:07.141-0500 D STORAGE [repl writer worker 15] stored meta data for config.mongos @ RecordId(14) [js_test:multi_coll_drop] 2016-04-06T02:52:16.363-0500 c20012| 2016-04-06T02:52:07.141-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.370-0500 c20013| 2016-04-06T02:52:07.141-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-33-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:16.372-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.373-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.375-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.376-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.379-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.382-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.383-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.386-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.389-0500 c20012| 2016-04-06T02:52:07.141-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.389-0500 c20012| 2016-04-06T02:52:07.142-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.390-0500 c20012| 2016-04-06T02:52:07.142-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.393-0500 c20012| 2016-04-06T02:52:07.142-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.396-0500 c20013| 2016-04-06T02:52:07.142-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.396-0500 c20012| 2016-04-06T02:52:07.142-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:16.398-0500 c20012| 2016-04-06T02:52:07.142-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.399-0500 c20012| 2016-04-06T02:52:07.142-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.402-0500 c20012| 
2016-04-06T02:52:07.142-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.404-0500 c20012| 2016-04-06T02:52:07.142-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.409-0500 c20012| 2016-04-06T02:52:07.142-0500 D STORAGE [repl writer worker 12] create collection config.mongos {} [js_test:multi_coll_drop] 2016-04-06T02:52:16.414-0500 c20012| 2016-04-06T02:52:07.142-0500 D STORAGE [repl writer worker 12] stored meta data for config.mongos @ RecordId(14) [js_test:multi_coll_drop] 2016-04-06T02:52:16.423-0500 c20012| 2016-04-06T02:52:07.142-0500 D STORAGE [repl writer worker 12] WiredTigerKVEngine::createRecordStore uri: table:collection-33-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:16.426-0500 c20012| 2016-04-06T02:52:07.142-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 179 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.142-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.428-0500 c20011| 2016-04-06T02:52:07.142-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929127000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|2, t: 1 }, name-id: "58" } [js_test:multi_coll_drop] 2016-04-06T02:52:16.432-0500 c20011| 2016-04-06T02:52:07.143-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.436-0500 c20011| 2016-04-06T02:52:07.143-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:538 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.438-0500 c20012| 2016-04-06T02:52:07.143-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 179 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.443-0500 c20013| 2016-04-06T02:52:07.142-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 178 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.142-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.449-0500 c20013| 2016-04-06T02:52:07.143-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 178 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.456-0500 c20013| 2016-04-06T02:52:07.143-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 178 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|4, t: 1, h: 856344690196641645, v: 2, op: "i", ns: "config.mongos", o: { _id: "mongovm16:20014", ping: new Date(1459929127125), up: 0, waiting: false, 
mongoVersion: "3.3.4-37-g36f3ff8" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.458-0500 c20011| 2016-04-06T02:52:07.143-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.462-0500 c20013| 2016-04-06T02:52:07.143-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|4 and ending at ts: Timestamp 1459929127000|4 [js_test:multi_coll_drop] 2016-04-06T02:52:16.469-0500 c20011| 2016-04-06T02:52:07.144-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:538 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.474-0500 c20012| 2016-04-06T02:52:07.144-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 179 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|4, t: 1, h: 856344690196641645, v: 2, op: "i", ns: "config.mongos", o: { _id: "mongovm16:20014", ping: new Date(1459929127125), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.478-0500 c20012| 2016-04-06T02:52:07.144-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|4 and ending at ts: Timestamp 1459929127000|4 [js_test:multi_coll_drop] 2016-04-06T02:52:16.481-0500 c20011| 2016-04-06T02:52:07.145-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.483-0500 c20013| 2016-04-06T02:52:07.145-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 180 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.145-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.486-0500 c20013| 2016-04-06T02:52:07.145-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 180 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.493-0500 c20012| 2016-04-06T02:52:07.146-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 181 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.146-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.494-0500 c20012| 2016-04-06T02:52:07.146-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 181 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.498-0500 c20011| 2016-04-06T02:52:07.146-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.502-0500 
c20013| 2016-04-06T02:52:07.149-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-33-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:16.503-0500 c20013| 2016-04-06T02:52:07.149-0500 D STORAGE [repl writer worker 15] config.mongos: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:16.506-0500 c20013| 2016-04-06T02:52:07.149-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-34-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.mongos" }), [js_test:multi_coll_drop] 2016-04-06T02:52:16.509-0500 c20013| 2016-04-06T02:52:07.149-0500 D STORAGE [repl writer worker 15] create uri: table:index-34-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.mongos" }), [js_test:multi_coll_drop] 2016-04-06T02:52:16.514-0500 c20012| 2016-04-06T02:52:07.148-0500 D STORAGE [repl writer worker 12] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-33-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:16.519-0500 c20012| 2016-04-06T02:52:07.148-0500 D STORAGE [repl writer worker 12] config.mongos: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:16.526-0500 c20012| 2016-04-06T02:52:07.148-0500 D STORAGE [repl writer worker 12] WiredTigerKVEngine::createSortedDataInterface ident: index-34-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.mongos" }), [js_test:multi_coll_drop] 2016-04-06T02:52:16.534-0500 c20012| 2016-04-06T02:52:07.148-0500 D STORAGE [repl writer worker 12] create uri: table:index-34-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.mongos" }), [js_test:multi_coll_drop] 2016-04-06T02:52:16.540-0500 c20013| 2016-04-06T02:52:07.152-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-34-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:16.542-0500 c20013| 2016-04-06T02:52:07.152-0500 D STORAGE [repl writer worker 15] config.mongos: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:16.546-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.546-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.550-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 2] 
shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.553-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.555-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.556-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.560-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.561-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.569-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.571-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.572-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.582-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.586-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.589-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.594-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.599-0500 c20013| 2016-04-06T02:52:07.153-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.601-0500 c20013| 2016-04-06T02:52:07.153-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.603-0500 c20013| 2016-04-06T02:52:07.153-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.611-0500 c20013| 2016-04-06T02:52:07.154-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.618-0500 c20013| 2016-04-06T02:52:07.154-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 181 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.619-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.625-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.625-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.630-0500 c20013| 2016-04-06T02:52:07.154-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 181 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.633-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.636-0500 c20011| 2016-04-06T02:52:07.154-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.637-0500 c20011| 2016-04-06T02:52:07.154-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:16.638-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.639-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.641-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 9] starting thread in 
pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.645-0500 c20011| 2016-04-06T02:52:07.154-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.649-0500 c20011| 2016-04-06T02:52:07.154-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|3, t: 1 } and is durable through: { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.653-0500 c20011| 2016-04-06T02:52:07.154-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|2, t: 1 }, name-id: "58" } [js_test:multi_coll_drop] 2016-04-06T02:52:16.654-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.663-0500 c20011| 2016-04-06T02:52:07.154-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.667-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.668-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.669-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.672-0500 c20013| 2016-04-06T02:52:07.154-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 181 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.675-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.677-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.680-0500 c20013| 2016-04-06T02:52:07.154-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:16.681-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.682-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.684-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:16.685-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.686-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.688-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.689-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.690-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.691-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.693-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.696-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.699-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.701-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.703-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.705-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.705-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.706-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.708-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.710-0500 c20013| 2016-04-06T02:52:07.154-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.711-0500 c20013| 2016-04-06T02:52:07.154-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.721-0500 c20013| 2016-04-06T02:52:07.154-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.731-0500 c20013| 2016-04-06T02:52:07.154-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 183 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.733-0500 c20013| 2016-04-06T02:52:07.154-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 183 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.738-0500 c20011| 2016-04-06T02:52:07.154-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.741-0500 c20011| 2016-04-06T02:52:07.154-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:16.743-0500 c20011| 2016-04-06T02:52:07.155-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.746-0500 c20011| 2016-04-06T02:52:07.155-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|4, t: 1 } and is durable through: { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.747-0500 c20011| 2016-04-06T02:52:07.155-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|2, t: 1 }, name-id: "58" } [js_test:multi_coll_drop] 2016-04-06T02:52:16.750-0500 c20011| 2016-04-06T02:52:07.155-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: 
Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.754-0500 c20013| 2016-04-06T02:52:07.155-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 183 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.758-0500 c20012| 2016-04-06T02:52:07.155-0500 D STORAGE [repl writer worker 12] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-34-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:16.761-0500 c20012| 2016-04-06T02:52:07.155-0500 D STORAGE [repl writer worker 12] config.mongos: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:16.762-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.765-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.768-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.771-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.772-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.775-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.781-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.783-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.785-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.788-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.788-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.793-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.795-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.795-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool 
[js_test:multi_coll_drop] 2016-04-06T02:52:16.796-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.797-0500 c20012| 2016-04-06T02:52:07.155-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.804-0500 c20013| 2016-04-06T02:52:07.155-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.813-0500 c20013| 2016-04-06T02:52:07.155-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 185 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.815-0500 c20013| 2016-04-06T02:52:07.155-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 185 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.817-0500 c20011| 2016-04-06T02:52:07.155-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.819-0500 c20011| 2016-04-06T02:52:07.155-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:16.824-0500 c20011| 2016-04-06T02:52:07.156-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.827-0500 c20011| 2016-04-06T02:52:07.156-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|4, t: 1 } and is durable through: { ts: Timestamp 1459929127000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.829-0500 c20011| 2016-04-06T02:52:07.156-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.830-0500 c20011| 2016-04-06T02:52:07.156-0500 D 
REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|3, t: 1 }, name-id: "61" } [js_test:multi_coll_drop] 2016-04-06T02:52:16.834-0500 c20011| 2016-04-06T02:52:07.156-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|3, t: 1 }, name-id: "61" } [js_test:multi_coll_drop] 2016-04-06T02:52:16.840-0500 c20011| 2016-04-06T02:52:07.156-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.841-0500 c20011| 2016-04-06T02:52:07.156-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 39 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:17.156-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.843-0500 c20013| 2016-04-06T02:52:07.156-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 185 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.845-0500 c20011| 2016-04-06T02:52:07.156-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 39 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:16.848-0500 c20012| 2016-04-06T02:52:07.155-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.854-0500 c20012| 2016-04-06T02:52:07.156-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:16.854-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.855-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.856-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.857-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.860-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.861-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.876-0500 c20012| 2016-04-06T02:52:07.156-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.878-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.885-0500 c20012| 2016-04-06T02:52:07.156-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 182 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.886-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.889-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.890-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.892-0500 c20012| 2016-04-06T02:52:07.156-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 182 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:16.894-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool 
[js_test:multi_coll_drop] 2016-04-06T02:52:16.895-0500 c20012| 2016-04-06T02:52:07.156-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.897-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.897-0500 c20012| 2016-04-06T02:52:07.156-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:16.897-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.897-0500 c20012| 2016-04-06T02:52:07.156-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:16.900-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.906-0500 c20011| 2016-04-06T02:52:07.156-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:16.906-0500 c20011| 2016-04-06T02:52:07.156-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:16.910-0500 c20011| 2016-04-06T02:52:07.156-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|3, t: 1 } and is durable through: { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.915-0500 c20011| 2016-04-06T02:52:07.156-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|3, t: 1 }, name-id: "61" } [js_test:multi_coll_drop] 2016-04-06T02:52:16.924-0500 c20011| 2016-04-06T02:52:07.156-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.930-0500 c20011| 2016-04-06T02:52:07.156-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.933-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool 
[js_test:multi_coll_drop] 2016-04-06T02:52:16.934-0500 c20012| 2016-04-06T02:52:07.156-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 182 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.943-0500 c20011| 2016-04-06T02:52:07.156-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.953-0500 c20012| 2016-04-06T02:52:07.156-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:16.957-0500 c20011| 2016-04-06T02:52:07.156-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 39 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, opTime: { ts: Timestamp 1459929127000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:16.961-0500 c20013| 2016-04-06T02:52:07.156-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 180 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.964-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.965-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.966-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.968-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.970-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.973-0500 c20012| 2016-04-06T02:52:07.156-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.974-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:16.979-0500 c20013| 2016-04-06T02:52:07.157-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:16.979-0500 c20013| 2016-04-06T02:52:07.157-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:16.986-0500 c20011| 2016-04-06T02:52:07.157-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot optime: { ts: Timestamp 1459929127000|2, t: 1 }, currentCommittedOpTime: { ts: Timestamp 1459929127000|3, t: 1 } [js_test:multi_coll_drop] 
2016-04-06T02:52:16.987-0500 c20011| 2016-04-06T02:52:07.157-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:09.157Z [js_test:multi_coll_drop] 2016-04-06T02:52:16.995-0500 c20011| 2016-04-06T02:52:07.157-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.001-0500 c20013| 2016-04-06T02:52:07.157-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 188 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.157-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.002-0500 c20012| 2016-04-06T02:52:07.157-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 181 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.002-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.005-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.011-0500 c20012| 2016-04-06T02:52:07.157-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.014-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.015-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.015-0500 c20012| 2016-04-06T02:52:07.157-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:17.018-0500 c20012| 2016-04-06T02:52:07.157-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 185 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.157-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.023-0500 c20013| 2016-04-06T02:52:07.157-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 188 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.024-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.024-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.031-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.036-0500 c20012| 2016-04-06T02:52:07.157-0500 D ASIO 
[NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 185 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.042-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.042-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.044-0500 c20012| 2016-04-06T02:52:07.157-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.050-0500 c20011| 2016-04-06T02:52:07.157-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.061-0500 c20011| 2016-04-06T02:52:07.157-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.065-0500 c20012| 2016-04-06T02:52:07.157-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:17.070-0500 c20012| 2016-04-06T02:52:07.157-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.076-0500 c20012| 2016-04-06T02:52:07.157-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 186 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.077-0500 c20012| 2016-04-06T02:52:07.157-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 186 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.078-0500 c20012| 2016-04-06T02:52:07.157-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 186 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.082-0500 c20012| 2016-04-06T02:52:07.157-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, 
cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.094-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 187 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.095-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.097-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 187 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.099-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 188 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.105-0500 c20013| 2016-04-06T02:52:07.158-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.114-0500 c20013| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 189 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.116-0500 c20013| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 189 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.123-0500 c20011| 2016-04-06T02:52:07.157-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, 
memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.124-0500 c20011| 2016-04-06T02:52:07.157-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:17.131-0500 c20011| 2016-04-06T02:52:07.157-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|4, t: 1 } and is durable through: { ts: Timestamp 1459929127000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.133-0500 c20011| 2016-04-06T02:52:07.157-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|3, t: 1 }, name-id: "61" } [js_test:multi_coll_drop] 2016-04-06T02:52:17.139-0500 c20011| 2016-04-06T02:52:07.157-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.148-0500 c20011| 2016-04-06T02:52:07.157-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.152-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.153-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:17.154-0500 c20011| 2016-04-06T02:52:07.158-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|4, t: 1 } and is durable through: { ts: Timestamp 1459929127000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.158-0500 c20011| 2016-04-06T02:52:07.158-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|3, t: 1 }, name-id: "61" } [js_test:multi_coll_drop] 2016-04-06T02:52:17.160-0500 c20011| 2016-04-06T02:52:07.158-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } 
[js_test:multi_coll_drop] 2016-04-06T02:52:17.166-0500 c20011| 2016-04-06T02:52:07.158-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.170-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.172-0500 c20011| 2016-04-06T02:52:07.158-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58968 #17 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:17.174-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 187 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.175-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:17.179-0500 c20011| 2016-04-06T02:52:07.158-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.181-0500 c20011| 2016-04-06T02:52:07.158-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|4, t: 1 } and is durable through: { ts: Timestamp 1459929127000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.184-0500 c20011| 2016-04-06T02:52:07.158-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.190-0500 c20011| 2016-04-06T02:52:07.158-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.191-0500 c20013| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 189 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.193-0500 c20013| 
2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 188 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.195-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 185 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.198-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn17] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.201-0500 c20011| 2016-04-06T02:52:07.158-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|3, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.206-0500 c20011| 2016-04-06T02:52:07.158-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|3, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.210-0500 c20012| 2016-04-06T02:52:07.158-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.214-0500 c20011| 2016-04-06T02:52:07.158-0500 I COMMAND [conn10] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929127125), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:445 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 33ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.217-0500 c20013| 2016-04-06T02:52:07.158-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.222-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 192 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.223-0500 c20012| 2016-04-06T02:52:07.158-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.223-0500 c20013| 2016-04-06T02:52:07.158-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:17.233-0500 c20012| 2016-04-06T02:52:07.158-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:17.236-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 192 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.240-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 193 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.158-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.242-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 193 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.246-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.249-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:17.251-0500 c20011| 2016-04-06T02:52:07.158-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|4, t: 1 } and is durable through: { ts: Timestamp 1459929127000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.257-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.259-0500 s20014| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 39 finished with response: { ok: 1, nModified: 0, n: 1, upserted: [ { index: 0, _id: "mongovm16:20014" } ], opTime: { ts: Timestamp 1459929127000|4, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.260-0500 c20011| 2016-04-06T02:52:07.158-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.265-0500 c20011| 2016-04-06T02:52:07.158-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.266-0500 c20012| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 192 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.269-0500 s20014| 2016-04-06T02:52:07.158-0500 D ASIO [Balancer] startCommand: RemoteCommand 41 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.158-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.272-0500 c20013| 2016-04-06T02:52:07.158-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 192 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.158-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.273-0500 c20013| 2016-04-06T02:52:07.158-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 192 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.276-0500 s20014| 2016-04-06T02:52:07.159-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 41 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.278-0500 c20012| 2016-04-06T02:52:07.159-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.280-0500 c20012| 2016-04-06T02:52:07.159-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 188 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:17.283-0500 c20011| 2016-04-06T02:52:07.158-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.285-0500 c20011| 2016-04-06T02:52:07.159-0500 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.287-0500 c20011| 2016-04-06T02:52:07.159-0500 D COMMAND [conn10] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.290-0500 c20011| 2016-04-06T02:52:07.159-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.294-0500 c20011| 2016-04-06T02:52:07.159-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, maxTimeMS: 30000 }
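The replSetUpdatePosition traffic above is how each member reports its applied and durable optimes up its sync-source chain; the primary (c20011, mongovm16:20011) folds these reports into its majority commit point. A minimal shell sketch for observing the same optimes from the outside, assuming a connection to any member of multidrop-configRS (the optimeDurable field is an assumption for this 3.3.x development series; older servers expose only optime):

    // Print each member's applied and durable optime, the same values
    // carried by the replSetUpdatePosition commands logged above.
    var status = rs.status();
    status.members.forEach(function (m) {
        print(m.name, "applied:", tojson(m.optime),
              "durable:", tojson(m.optimeDurable)); // optimeDurable: assumed field name
    });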
{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.298-0500 c20011| 2016-04-06T02:52:07.159-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:17.303-0500 c20011| 2016-04-06T02:52:07.159-0500 I COMMAND [conn10] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:390 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.311-0500 s20014| 2016-04-06T02:52:07.159-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 41 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.314-0500 s20014| 2016-04-06T02:52:07.159-0500 D SHARDING [Balancer] found 0 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929127000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.316-0500 s20014| 2016-04-06T02:52:07.159-0500 D ASIO [Balancer] startCommand: RemoteCommand 43 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:37.159-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.320-0500 s20014| 2016-04-06T02:52:07.159-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 43 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:17.325-0500 c20013| 2016-04-06T02:52:07.159-0500 D COMMAND [conn10] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.327-0500 c20013| 2016-04-06T02:52:07.159-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.331-0500 c20013| 2016-04-06T02:52:07.159-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.332-0500 c20013| 2016-04-06T02:52:07.159-0500 D QUERY [conn10] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:17.335-0500 s20014| 2016-04-06T02:52:07.159-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 43 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.339-0500 c20013| 2016-04-06T02:52:07.159-0500 I COMMAND [conn10] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:414 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.340-0500 s20014| 2016-04-06T02:52:07.159-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:52:17.347-0500 s20014| 2016-04-06T02:52:07.159-0500 D ASIO [Balancer] startCommand: RemoteCommand 45 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:37.159-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.349-0500 s20014| 2016-04-06T02:52:07.160-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 45 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:17.351-0500 c20012| 2016-04-06T02:52:07.160-0500 D COMMAND [conn7] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.358-0500 c20012| 2016-04-06T02:52:07.160-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.361-0500 c20012| 2016-04-06T02:52:07.160-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.362-0500 c20012| 2016-04-06T02:52:07.160-0500 D QUERY [conn7] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:17.367-0500 c20012| 2016-04-06T02:52:07.160-0500 I COMMAND [conn7] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:0 docsExamined:0 idhack:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:372 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.369-0500 s20014| 2016-04-06T02:52:07.160-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 45 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.370-0500 s20014| 2016-04-06T02:52:07.160-0500 D SHARDING [Balancer] trying to acquire new distributed lock for balancer ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c02706c33406d4d9c0bb, why: doing balance round [js_test:multi_coll_drop] 2016-04-06T02:52:17.374-0500 s20014| 2016-04-06T02:52:07.160-0500 D ASIO [Balancer] startCommand: RemoteCommand 47 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.160-0500 cmd:{ findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('5704c02706c33406d4d9c0bb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:Balancer", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929127160), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.375-0500 s20014| 2016-04-06T02:52:07.160-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 47 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.379-0500 c20011| 2016-04-06T02:52:07.160-0500 D COMMAND [conn10] run command config.$cmd { findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('5704c02706c33406d4d9c0bb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:Balancer", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929127160), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.389-0500 c20011| 2016-04-06T02:52:07.160-0500 D QUERY [conn10] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:17.392-0500 c20011| 2016-04-06T02:52:07.160-0500 D QUERY [conn10] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:17.392-0500 c20011| 2016-04-06T02:52:07.160-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. 
query: { _id: "balancer", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.395-0500 c20011| 2016-04-06T02:52:07.161-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:635 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.399-0500 c20013| 2016-04-06T02:52:07.161-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 192 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|5, t: 1, h: 4397371748224475088, v: 2, op: "i", ns: "config.locks", o: { _id: "balancer", state: 2, ts: ObjectId('5704c02706c33406d4d9c0bb'), who: "mongovm16:20014:1459929123:-665935931:Balancer", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929127160), why: "doing balance round" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.400-0500 c20011| 2016-04-06T02:52:07.162-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929127000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|4, t: 1 }, name-id: "62" } [js_test:multi_coll_drop] 2016-04-06T02:52:17.403-0500 c20011| 2016-04-06T02:52:07.165-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:635 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.406-0500 c20012| 2016-04-06T02:52:07.165-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 193 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|5, t: 1, h: 4397371748224475088, v: 2, op: "i", ns: "config.locks", o: { _id: "balancer", state: 2, ts: ObjectId('5704c02706c33406d4d9c0bb'), who: "mongovm16:20014:1459929123:-665935931:Balancer", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929127160), why: "doing balance round" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.408-0500 c20012| 2016-04-06T02:52:07.165-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|5 and ending at ts: Timestamp 1459929127000|5 [js_test:multi_coll_drop] 2016-04-06T02:52:17.409-0500 c20012| 2016-04-06T02:52:07.165-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
[js_test:multi_coll_drop] 2016-04-06T02:52:17.410-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.414-0500 c20013| 2016-04-06T02:52:07.165-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|5 and ending at ts: Timestamp 1459929127000|5
[js_test:multi_coll_drop] 2016-04-06T02:52:17.415-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.419-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.420-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.420-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.422-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.424-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.425-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.427-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.428-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.428-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.429-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.430-0500 c20012| 2016-04-06T02:52:07.165-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:17.434-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.435-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.436-0500 c20012| 2016-04-06T02:52:07.165-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.437-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.439-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.439-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.441-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.441-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.443-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.443-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.444-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.445-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.446-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.446-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.447-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.448-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.449-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.451-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.452-0500 c20012| 2016-04-06T02:52:07.166-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.456-0500 c20012| 2016-04-06T02:52:07.167-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 196 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.167-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.459-0500 c20012| 2016-04-06T02:52:07.167-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 196 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.461-0500 c20013| 2016-04-06T02:52:07.167-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 194 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.167-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.462-0500 c20013| 2016-04-06T02:52:07.167-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 194 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.464-0500 c20011| 2016-04-06T02:52:07.167-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.467-0500 c20011| 2016-04-06T02:52:07.167-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.469-0500 c20013| 2016-04-06T02:52:07.171-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:17.471-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.473-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.474-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.477-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.479-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.479-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.480-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.481-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.483-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.484-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.484-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.485-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.487-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.488-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.488-0500 c20013| 2016-04-06T02:52:07.171-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:17.489-0500 c20013| 2016-04-06T02:52:07.171-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.490-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.491-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.492-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.495-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.496-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.497-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.498-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.500-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.513-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.514-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.514-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.518-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.518-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.520-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.521-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.523-0500 c20012| 2016-04-06T02:52:07.175-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.525-0500 c20012| 2016-04-06T02:52:07.175-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
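The burst of "repl writer worker N" lines is the secondaries applying the fetched batch: for each batch (here "replication batch size is 1") the applier spins up its writer pool, applies the operations in parallel, and tears the threads back down. The operations themselves are visible in the oplog; a small sketch, assuming a connection to any member of the set:

    // Show the most recent config.locks operations the writer pool is
    // applying (the op: "i" lock insert and op: "u" unlock seen above).
    db.getSiblingDB("local").oplog.rs
        .find({ ns: "config.locks" })
        .sort({ $natural: -1 })
        .limit(2)
        .forEach(printjson);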
[js_test:multi_coll_drop] 2016-04-06T02:52:17.528-0500 c20012| 2016-04-06T02:52:07.176-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.531-0500 c20012| 2016-04-06T02:52:07.176-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 197 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.532-0500 c20012| 2016-04-06T02:52:07.176-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 197 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.539-0500 c20011| 2016-04-06T02:52:07.176-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.539-0500 c20011| 2016-04-06T02:52:07.176-0500 D COMMAND [conn17] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:17.545-0500 c20011| 2016-04-06T02:52:07.176-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|5, t: 1 } and is durable through: { ts: Timestamp 1459929127000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.548-0500 c20011| 2016-04-06T02:52:07.176-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929127000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|4, t: 1 }, name-id: "62" }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.551-0500 c20011| 2016-04-06T02:52:07.177-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.554-0500 c20011| 2016-04-06T02:52:07.177-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.554-0500 c20012| 2016-04-06T02:52:07.177-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 197 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.559-0500 c20012| 2016-04-06T02:52:07.177-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.570-0500 c20012| 2016-04-06T02:52:07.177-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 198 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.571-0500 c20012| 2016-04-06T02:52:07.177-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 198 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.579-0500 c20011| 2016-04-06T02:52:07.177-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.582-0500 c20011| 2016-04-06T02:52:07.177-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:17.584-0500 c20011| 2016-04-06T02:52:07.177-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|5, t: 1 } and is durable through: { ts: Timestamp 1459929127000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.585-0500 c20011| 2016-04-06T02:52:07.177-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.586-0500 c20012| 2016-04-06T02:52:07.177-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 198 finished with response: { ok: 1.0 }
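"Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|5, t: 1 }" marks the commit point advancing: with member 1 now durable through |5 and the primary itself already there, two of the three members of multidrop-configRS form a majority, so writes through |5 become majority-committed. A sketch for reading the commit point directly (the optimes subdocument of replSetGetStatus is an assumption for this server series):

    // Inspect the primary's view of the majority commit point.
    var st = db.adminCommand({ replSetGetStatus: 1 });
    printjson(st.optimes); // assumed to include lastCommittedOpTime here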
[js_test:multi_coll_drop] 2016-04-06T02:52:17.588-0500 c20011| 2016-04-06T02:52:07.177-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.592-0500 c20011| 2016-04-06T02:52:07.177-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.597-0500 c20011| 2016-04-06T02:52:07.177-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.599-0500 c20013| 2016-04-06T02:52:07.177-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 194 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.606-0500 c20011| 2016-04-06T02:52:07.177-0500 I COMMAND [conn10] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('5704c02706c33406d4d9c0bb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:Balancer", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929127160), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02706c33406d4d9c0bb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:Balancer", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929127160), why: "doing balance round" } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 reslen:585 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 17ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.607-0500 c20011| 2016-04-06T02:52:07.178-0500 D COMMAND [conn10] run command config.$cmd { find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|5, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.610-0500 c20011| 2016-04-06T02:52:07.178-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|5, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.611-0500 c20011| 2016-04-06T02:52:07.178-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|5, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.612-0500 c20011| 2016-04-06T02:52:07.178-0500 D QUERY [conn10] Collection config.collections does not exist. Using EOF plan: query: {} sort: {} projection: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:17.615-0500 c20011| 2016-04-06T02:52:07.178-0500 I COMMAND [conn10] command config.collections command: find { find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|5, t: 1 } }, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:395 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.617-0500 c20011| 2016-04-06T02:52:07.178-0500 D COMMAND [conn10] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02706c33406d4d9c0bb') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.619-0500 c20011| 2016-04-06T02:52:07.178-0500 D QUERY [conn10] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.623-0500 c20011| 2016-04-06T02:52:07.178-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02706c33406d4d9c0bb') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.624-0500 c20011| 2016-04-06T02:52:07.180-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929127000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|5, t: 1 }, name-id: "63" }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.625-0500 c20013| 2016-04-06T02:52:07.172-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.626-0500 c20013| 2016-04-06T02:52:07.185-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.629-0500 c20011| 2016-04-06T02:52:07.185-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:489 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 18ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.632-0500 c20013| 2016-04-06T02:52:07.185-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.634-0500 c20013| 2016-04-06T02:52:07.185-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:17.637-0500 c20013| 2016-04-06T02:52:07.185-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 196 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.185-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.642-0500 c20013| 2016-04-06T02:52:07.185-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:17.642-0500 c20013| 2016-04-06T02:52:07.185-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 196 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.644-0500 s20014| 2016-04-06T02:52:07.178-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 47 finished with response: { lastErrorObject: { updatedExisting: false, n: 1, upserted: "balancer" }, value: { _id: "balancer", state: 2, ts: ObjectId('5704c02706c33406d4d9c0bb'), who: "mongovm16:20014:1459929123:-665935931:Balancer", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929127160), why: "doing balance round" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.645-0500 s20014| 2016-04-06T02:52:07.178-0500 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 5704c02706c33406d4d9c0bb
[js_test:multi_coll_drop] 2016-04-06T02:52:17.646-0500 s20014| 2016-04-06T02:52:07.178-0500 D SHARDING [Balancer] *** start balancing round. waitForDelete: 0, secondaryThrottle: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:17.648-0500 s20014| 2016-04-06T02:52:07.178-0500 D ASIO [Balancer] startCommand: RemoteCommand 49 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.178-0500 cmd:{ find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|5, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.650-0500 s20014| 2016-04-06T02:52:07.178-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 49 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.654-0500 s20014| 2016-04-06T02:52:07.178-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 49 finished with response: { waitedMS: 0, cursor: { id: 0, ns: "config.collections", firstBatch: [] }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.655-0500 s20014| 2016-04-06T02:52:07.178-0500 D SHARDING [Balancer] no collections to balance
[js_test:multi_coll_drop] 2016-04-06T02:52:17.656-0500 s20014| 2016-04-06T02:52:07.178-0500 D SHARDING [Balancer] no need to move any chunk
[js_test:multi_coll_drop] 2016-04-06T02:52:17.657-0500 s20014| 2016-04-06T02:52:07.178-0500 D SHARDING [Balancer] *** End of balancing round
[js_test:multi_coll_drop] 2016-04-06T02:52:17.662-0500 s20014| 2016-04-06T02:52:07.178-0500 D ASIO [Balancer] startCommand: RemoteCommand 51 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.178-0500 cmd:{ findAndModify: "locks", query: { ts: ObjectId('5704c02706c33406d4d9c0bb') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.666-0500 s20014| 2016-04-06T02:52:07.178-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 51 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.675-0500 c20013| 2016-04-06T02:52:07.185-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.676-0500 c20011| 2016-04-06T02:52:07.185-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.682-0500 c20013| 2016-04-06T02:52:07.185-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 197 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.688-0500 c20013| 2016-04-06T02:52:07.186-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 197 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:17.693-0500 c20011| 2016-04-06T02:52:07.186-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:489 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.695-0500 c20011| 2016-04-06T02:52:07.186-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.700-0500 c20013| 2016-04-06T02:52:07.186-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 196 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|6, t: 1, h: -6953880955376886912, v: 2, op: "u", ns: "config.locks", o2: { _id: "balancer" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.701-0500 c20011| 2016-04-06T02:52:07.186-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:17.710-0500 c20011| 2016-04-06T02:52:07.186-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.716-0500 c20011| 2016-04-06T02:52:07.186-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|5, t: 1 } and is durable through: { ts: Timestamp 1459929127000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.720-0500 c20013| 2016-04-06T02:52:07.186-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|6 and ending at ts: Timestamp 1459929127000|6
[js_test:multi_coll_drop] 2016-04-06T02:52:17.723-0500 c20011| 2016-04-06T02:52:07.186-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|5, t: 1 }, name-id: "63" }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.727-0500 c20011| 2016-04-06T02:52:07.186-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:17.730-0500 c20013| 2016-04-06T02:52:07.186-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 197 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.731-0500 c20013| 2016-04-06T02:52:07.186-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:17.732-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.735-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.740-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.741-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.741-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.742-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.743-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.743-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.744-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.747-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.748-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.754-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.756-0500 c20013| 2016-04-06T02:52:07.186-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:17.757-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.760-0500 c20013| 2016-04-06T02:52:07.186-0500 D QUERY [repl writer worker 6] Using idhack: { _id: "balancer" }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.762-0500 c20013| 2016-04-06T02:52:07.186-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.765-0500 c20012| 2016-04-06T02:52:07.186-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 196 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|6, t: 1, h: -6953880955376886912, v: 2, op: "u", ns: "config.locks", o2: { _id: "balancer" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.768-0500 c20012| 2016-04-06T02:52:07.187-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:17.770-0500 c20012| 2016-04-06T02:52:07.187-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|6 and ending at ts: Timestamp 1459929127000|6
[js_test:multi_coll_drop] 2016-04-06T02:52:17.772-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.773-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.774-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.777-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.779-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.780-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.783-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.793-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.795-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:17.796-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 10] shutting
down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.799-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.800-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.802-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.803-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.803-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.804-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.804-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.806-0500 c20013| 2016-04-06T02:52:07.187-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.809-0500 c20013| 2016-04-06T02:52:07.188-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 200 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.188-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.810-0500 c20013| 2016-04-06T02:52:07.188-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 200 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.813-0500 c20011| 2016-04-06T02:52:07.188-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.816-0500 c20012| 2016-04-06T02:52:07.188-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:17.817-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.817-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.818-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.821-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.828-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.828-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.829-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.840-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.843-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.844-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.844-0500 c20012| 2016-04-06T02:52:07.188-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:17.845-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.845-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.846-0500 c20012| 2016-04-06T02:52:07.188-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "balancer" } [js_test:multi_coll_drop] 2016-04-06T02:52:17.847-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.848-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.851-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.852-0500 c20012| 2016-04-06T02:52:07.188-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.856-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.859-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
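On the secondaries c20012 and c20013, that same entry is flowing through the standard apply pipeline: the background-sync fetcher reads it from the primary's oplog, rsSync turns it into a batch ("replication batch size is 1"), and a repl-writer worker applies it with an exact-_id lookup (the "Using idhack" plan), the worker-pool threads starting and shutting down around each batch. As a rough shell sketch, applying the fetched entry amounts to the following (the real path goes through the writer pool, not the update command; the entry is copied from the nextBatch above):

    // The oplog entry fetched by both secondaries: an update ("u") to
    // config.locks, addressed by exact _id (o2), with the modifier in o.
    var entry = {
        op: "u",
        ns: "config.locks",
        o2: { _id: "balancer" },
        o:  { $set: { state: 0 } }
    };
    // Applying it is an idempotent update by _id -- the direct _id lookup is
    // why the plan shows up as "idhack" and never touches the plan cache.
    db.getSiblingDB("config").locks.update(entry.o2, entry.o);
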
2016-04-06T02:52:17.860-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.860-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.861-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.863-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.866-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.867-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.868-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.869-0500 c20013| 2016-04-06T02:52:07.189-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:17.870-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.871-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.872-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.872-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.873-0500 c20012| 2016-04-06T02:52:07.189-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.879-0500 c20012| 2016-04-06T02:52:07.189-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 202 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.189-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.883-0500 c20013| 2016-04-06T02:52:07.189-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.886-0500 c20013| 2016-04-06T02:52:07.189-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] 
startCommand: RemoteCommand 201 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.886-0500 c20013| 2016-04-06T02:52:07.189-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 201 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.891-0500 c20011| 2016-04-06T02:52:07.189-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.891-0500 c20011| 2016-04-06T02:52:07.189-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:17.899-0500 c20011| 2016-04-06T02:52:07.189-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.906-0500 c20011| 2016-04-06T02:52:07.189-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|6, t: 1 } and is durable through: { ts: Timestamp 1459929127000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.908-0500 c20011| 2016-04-06T02:52:07.189-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|5, t: 1 }, name-id: "63" } [js_test:multi_coll_drop] 2016-04-06T02:52:17.914-0500 c20011| 2016-04-06T02:52:07.189-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.915-0500 c20013| 2016-04-06T02:52:07.189-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 201 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.917-0500 c20012| 2016-04-06T02:52:07.189-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 202 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.923-0500 c20011| 2016-04-06T02:52:07.189-0500 D COMMAND 
[conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:17.932-0500 c20013| 2016-04-06T02:52:07.190-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.937-0500 c20013| 2016-04-06T02:52:07.190-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 203 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.938-0500 c20012| 2016-04-06T02:52:07.190-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.939-0500 c20012| 2016-04-06T02:52:07.190-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:17.941-0500 c20013| 2016-04-06T02:52:07.190-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 203 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:17.948-0500 c20011| 2016-04-06T02:52:07.190-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.949-0500 c20011| 2016-04-06T02:52:07.190-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:17.951-0500 c20011| 2016-04-06T02:52:07.190-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.952-0500 c20013| 2016-04-06T02:52:07.190-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 203 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.954-0500 c20011| 2016-04-06T02:52:07.190-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 
1459929127000|6, t: 1 } and is durable through: { ts: Timestamp 1459929127000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.958-0500 c20011| 2016-04-06T02:52:07.190-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|5, t: 1 }, name-id: "63" } [js_test:multi_coll_drop] 2016-04-06T02:52:17.963-0500 c20011| 2016-04-06T02:52:07.190-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.964-0500 c20012| 2016-04-06T02:52:07.190-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:17.966-0500 s20014| 2016-04-06T02:52:07.190-0500 I NETWORK [mongosMain] waiting for connections on port 20014 [js_test:multi_coll_drop] 2016-04-06T02:52:17.970-0500 c20011| 2016-04-06T02:52:07.191-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.971-0500 c20011| 2016-04-06T02:52:07.191-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:17.974-0500 c20011| 2016-04-06T02:52:07.191-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.976-0500 c20011| 2016-04-06T02:52:07.191-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|6, t: 1 } and is durable through: { ts: Timestamp 1459929127000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.980-0500 c20011| 2016-04-06T02:52:07.191-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:17.986-0500 c20011| 2016-04-06T02:52:07.191-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: 
Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.991-0500 c20011| 2016-04-06T02:52:07.191-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:17.995-0500 c20013| 2016-04-06T02:52:07.191-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:17.997-0500 c20013| 2016-04-06T02:52:07.191-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 205 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.013-0500 c20013| 2016-04-06T02:52:07.191-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 205 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.015-0500 c20013| 2016-04-06T02:52:07.191-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 205 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.016-0500 c20013| 2016-04-06T02:52:07.191-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 200 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.017-0500 c20013| 2016-04-06T02:52:07.191-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.017-0500 c20013| 2016-04-06T02:52:07.191-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:18.024-0500 c20011| 2016-04-06T02:52:07.191-0500 I COMMAND [conn10] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02706c33406d4d9c0bb') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:562 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, 
Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.028-0500 s20014| 2016-04-06T02:52:07.191-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 51 finished with response: { lastErrorObject: { updatedExisting: true, n: 1 }, value: { _id: "balancer", state: 2, ts: ObjectId('5704c02706c33406d4d9c0bb'), who: "mongovm16:20014:1459929123:-665935931:Balancer", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929127160), why: "doing balance round" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.032-0500 c20013| 2016-04-06T02:52:07.191-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 208 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.191-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.034-0500 c20013| 2016-04-06T02:52:07.191-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 208 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.037-0500 c20011| 2016-04-06T02:52:07.192-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.042-0500 c20011| 2016-04-06T02:52:07.192-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|5, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.043-0500 s20014| 2016-04-06T02:52:07.192-0500 I SHARDING [Balancer] distributed lock with ts: 5704c02706c33406d4d9c0bb unlocked.
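That closes one full balancer round as seen from the mongos s20014: the "balancer" document in config.locks had been taken with state: 2 and why: "doing balance round", the findAndModify above (13ms, nearly all of it the majority wait) put it back to state: 0, and mongos logs the lock as unlocked. A sketch for watching that lifecycle from a shell pointed at the config server primary (collection and field values as printed in the log):

    // The balancer's distributed lock lives in config.locks; state 2 = held,
    // state 0 = free. ts identifies the acquisition that held it.
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "balancer" });
    printjson(lock);
    // While the round runs:  { _id: "balancer", state: 2,
    //                          ts: ObjectId('5704c02706c33406d4d9c0bb'),
    //                          why: "doing balance round", ... }
    // After the unlock above commits, the same document has state: 0.
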
[js_test:multi_coll_drop] 2016-04-06T02:52:18.048-0500 s20014| 2016-04-06T02:52:07.192-0500 D ASIO [Balancer] startCommand: RemoteCommand 53 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.192-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929127192), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.056-0500 c20011| 2016-04-06T02:52:07.192-0500 D COMMAND [conn10] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929127192), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.059-0500 s20014| 2016-04-06T02:52:07.192-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 53 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.060-0500 c20011| 2016-04-06T02:52:07.192-0500 D QUERY [conn10] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.066-0500 c20011| 2016-04-06T02:52:07.192-0500 I WRITE [conn10] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929127192), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.073-0500 c20012| 2016-04-06T02:52:07.192-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 202 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.077-0500 c20012| 2016-04-06T02:52:07.192-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.077-0500 c20012| 2016-04-06T02:52:07.192-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:18.081-0500 c20012| 2016-04-06T02:52:07.192-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 204 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.192-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.084-0500 c20013| 2016-04-06T02:52:07.192-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 208 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|7, t: 1, h: 5599431919262435152, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929127192), waiting: true } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.090-0500 c20011| 2016-04-06T02:52:07.192-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 
1459929127000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.092-0500 c20013| 2016-04-06T02:52:07.192-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|7 and ending at ts: Timestamp 1459929127000|7 [js_test:multi_coll_drop] 2016-04-06T02:52:18.092-0500 c20013| 2016-04-06T02:52:07.193-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:18.096-0500 c20011| 2016-04-06T02:52:07.193-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.097-0500 c20012| 2016-04-06T02:52:07.193-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 204 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.100-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.101-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.103-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.104-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.105-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.105-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.106-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.107-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.109-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.110-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.113-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.120-0500 c20011| 2016-04-06T02:52:07.193-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 
protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.126-0500 c20012| 2016-04-06T02:52:07.193-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 204 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|7, t: 1, h: 5599431919262435152, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929127192), waiting: true } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.126-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.127-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.129-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.130-0500 c20013| 2016-04-06T02:52:07.193-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:18.132-0500 c20013| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.133-0500 c20013| 2016-04-06T02:52:07.193-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.134-0500 c20012| 2016-04-06T02:52:07.193-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|7 and ending at ts: Timestamp 1459929127000|7 [js_test:multi_coll_drop] 2016-04-06T02:52:18.136-0500 c20012| 2016-04-06T02:52:07.193-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:18.136-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.136-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.137-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.139-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.140-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.141-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.143-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.147-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.148-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.150-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.151-0500 c20012| 2016-04-06T02:52:07.193-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.153-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.158-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.160-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.162-0500 c20012| 2016-04-06T02:52:07.194-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:18.170-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.171-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.173-0500 c20012| 2016-04-06T02:52:07.194-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.176-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.177-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
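All of the getMore traffic in this section is the two secondaries tailing the primary's oplog: each request names the tailing cursor, sets maxTimeMS: 2500 so an empty poll returns after 2.5 seconds, and carries the current term plus lastKnownCommittedOpTime so the primary can piggyback commit-point advances onto the reply; each reply's nextBatch is either empty or the next entries to apply (here the config.mongos ping update at ts ...|7). A shell sketch of an equivalent tailing loop, assuming a direct connection to mongovm16:20011 and a start timestamp copied from the log:

    // Tail the oplog with a tailable, awaitData cursor; each blocking getMore
    // waits server-side for new entries, mirroring the maxTimeMS: 2500 polls
    // seen in the log.
    var cur = db.getSiblingDB("local").oplog.rs
                .find({ ts: { $gte: Timestamp(1459929127, 6) } })
                .addOption(DBQuery.Option.tailable)
                .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) {       // blocks; loops until the cursor is killed
        printjson(cur.next());    // e.g. the config.mongos update at ts ...|7
    }
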
2016-04-06T02:52:18.179-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.181-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.183-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.187-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.189-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.189-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.190-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.197-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.197-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.200-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.201-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.203-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.203-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.209-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.212-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.212-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.223-0500 c20012| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.235-0500 c20013| 2016-04-06T02:52:07.194-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.240-0500 c20013| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.243-0500 c20013| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.254-0500 c20013| 2016-04-06T02:52:07.195-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 210 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.195-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.259-0500 c20013| 2016-04-06T02:52:07.195-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 210 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.261-0500 c20013| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.262-0500 c20011| 2016-04-06T02:52:07.195-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929127000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|6, t: 1 }, name-id: "64" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.265-0500 c20011| 2016-04-06T02:52:07.195-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.266-0500 c20013| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.267-0500 c20013| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.274-0500 c20011| 2016-04-06T02:52:07.195-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.275-0500 c20011| 2016-04-06T02:52:07.195-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:18.279-0500 c20011| 2016-04-06T02:52:07.195-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|6, t: 1 } and is durable through: { ts: Timestamp 1459929127000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.283-0500 c20011| 2016-04-06T02:52:07.195-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|6, t: 1 }, name-id: "64" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.286-0500 c20011| 2016-04-06T02:52:07.195-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|6, t: 1 }, name-id: "64" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.293-0500 c20011| 2016-04-06T02:52:07.195-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and 
is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.299-0500 c20011| 2016-04-06T02:52:07.195-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.300-0500 c20013| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.303-0500 c20013| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.305-0500 c20013| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.311-0500 c20012| 2016-04-06T02:52:07.195-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.311-0500 c20012| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.316-0500 c20012| 2016-04-06T02:52:07.195-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 206 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.316-0500 c20012| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.320-0500 c20012| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.322-0500 c20012| 2016-04-06T02:52:07.195-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 206 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.322-0500 c20012| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer 
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.324-0500 c20012| 2016-04-06T02:52:07.195-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.326-0500 c20012| 2016-04-06T02:52:07.195-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:18.329-0500 c20012| 2016-04-06T02:52:07.195-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 206 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.336-0500 c20012| 2016-04-06T02:52:07.195-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.340-0500 c20012| 2016-04-06T02:52:07.195-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 207 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.341-0500 c20012| 2016-04-06T02:52:07.195-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 207 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.344-0500 c20012| 2016-04-06T02:52:07.195-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 209 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.195-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.350-0500 c20011| 2016-04-06T02:52:07.195-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.355-0500 c20011| 2016-04-06T02:52:07.195-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:18.357-0500 c20011| 2016-04-06T02:52:07.195-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|7, t: 1 } and is durable 
through: { ts: Timestamp 1459929127000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.358-0500 c20011| 2016-04-06T02:52:07.195-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929127000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|6, t: 1 }, name-id: "64" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.360-0500 c20011| 2016-04-06T02:52:07.195-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.368-0500 c20011| 2016-04-06T02:52:07.195-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.369-0500 c20012| 2016-04-06T02:52:07.195-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 209 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.373-0500 c20011| 2016-04-06T02:52:07.195-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.375-0500 c20013| 2016-04-06T02:52:07.196-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:18.384-0500 c20013| 2016-04-06T02:52:07.196-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.391-0500 c20013| 2016-04-06T02:52:07.196-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 211 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.393-0500 c20013| 2016-04-06T02:52:07.196-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 211 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.396-0500 c20011| 2016-04-06T02:52:07.196-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.396-0500 c20011| 2016-04-06T02:52:07.196-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:18.402-0500 c20011| 2016-04-06T02:52:07.196-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.406-0500 c20011| 2016-04-06T02:52:07.196-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|7, t: 1 } and is durable through: { ts: Timestamp 1459929127000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.418-0500 c20011| 2016-04-06T02:52:07.196-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|6, t: 1 }, name-id: "64" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.422-0500 c20011| 2016-04-06T02:52:07.196-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: 
Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.422-0500 c20013| 2016-04-06T02:52:07.196-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 211 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.423-0500 c20012| 2016-04-06T02:52:07.196-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 207 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.429-0500 c20013| 2016-04-06T02:52:07.198-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.431-0500 c20013| 2016-04-06T02:52:07.198-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 213 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.432-0500 c20013| 2016-04-06T02:52:07.198-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 213 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.437-0500 c20011| 2016-04-06T02:52:07.198-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.438-0500 c20011| 2016-04-06T02:52:07.198-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:18.439-0500 c20011| 2016-04-06T02:52:07.198-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.440-0500 c20011| 2016-04-06T02:52:07.198-0500 D REPL [conn16] received notification that node with memberID 2 
in config with version 1 has reached optime: { ts: Timestamp 1459929127000|7, t: 1 } and is durable through: { ts: Timestamp 1459929127000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.441-0500 c20011| 2016-04-06T02:52:07.198-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.445-0500 c20011| 2016-04-06T02:52:07.198-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.449-0500 c20011| 2016-04-06T02:52:07.198-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.454-0500 c20011| 2016-04-06T02:52:07.198-0500 I COMMAND [conn10] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929127192), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.457-0500 s20014| 2016-04-06T02:52:07.198-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 53 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929127000|7, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:18.458-0500 c20013| 2016-04-06T02:52:07.198-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 213 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.460-0500 c20013| 2016-04-06T02:52:07.198-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 210 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.468-0500 c20011| 2016-04-06T02:52:07.198-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } 
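The exchange above is the majority-write machinery end to end: each secondary reports its durable and applied optimes via replSetUpdatePosition, the primary advances _lastCommittedOpTime to { ts: Timestamp 1459929127000|7, t: 1 } once a majority is durable at that point, and only then does the mongos ping upsert against config.mongos (written with w: "majority", wtimeout: 15000) acknowledge. The same upsert can be issued by hand from the shell; a minimal sketch that mirrors the values logged by conn10 above, purely for illustration rather than the mongos implementation:

    // Upsert a mongos ping document under a majority write concern, matching
    // the update logged by conn10; it acknowledges only after the commit
    // point (_lastCommittedOpTime) reaches the write.
    var config = db.getSiblingDB("config");
    var res = config.mongos.update(
        { _id: "mongovm16:20014" },
        { $set: { ping: new Date(), up: 0, waiting: true } },
        { upsert: true, writeConcern: { w: "majority", wtimeout: 15000 } }
    );
    printjson(res);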
[js_test:multi_coll_drop] 2016-04-06T02:52:18.471-0500 c20011| 2016-04-06T02:52:07.198-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.475-0500 c20012| 2016-04-06T02:52:07.198-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.480-0500 c20012| 2016-04-06T02:52:07.198-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 211 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:18.481-0500 c20012| 2016-04-06T02:52:07.198-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 211 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.485-0500 c20012| 2016-04-06T02:52:07.199-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 209 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.486-0500 c20012| 2016-04-06T02:52:07.199-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.486-0500 c20012| 2016-04-06T02:52:07.199-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:18.487-0500 c20013| 2016-04-06T02:52:07.199-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.488-0500 c20013| 2016-04-06T02:52:07.199-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:18.490-0500 c20013| 2016-04-06T02:52:07.199-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 216 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.199-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.490-0500 c20011| 2016-04-06T02:52:07.198-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 
2016-04-06T02:52:18.494-0500 c20011| 2016-04-06T02:52:07.199-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|7, t: 1 } and is durable through: { ts: Timestamp 1459929127000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.499-0500 c20011| 2016-04-06T02:52:07.199-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.504-0500 c20011| 2016-04-06T02:52:07.199-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.506-0500 c20013| 2016-04-06T02:52:07.199-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 216 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.510-0500 c20012| 2016-04-06T02:52:07.199-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 213 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.199-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.511-0500 c20012| 2016-04-06T02:52:07.199-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 211 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.512-0500 c20011| 2016-04-06T02:52:07.199-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.514-0500 c20012| 2016-04-06T02:52:07.199-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 213 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.517-0500 c20011| 2016-04-06T02:52:07.199-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.517-0500 s20014| 2016-04-06T02:52:07.316-0500 I NETWORK [mongosMain] connection accepted from 127.0.0.1:55066 #1 (1 connection now open) [js_test:multi_coll_drop] 2016-04-06T02:52:18.521-0500 2016-04-06T02:52:07.318-0500 I - [thread1] shell: started program (sh80193): /data/mci/src/mongos --configdb multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 -vv --chunkSize 50 --port 20015 --setParameter enableTestCommands=1 [js_test:multi_coll_drop] 2016-04-06T02:52:18.521-0500 2016-04-06T02:52:07.318-0500 W NETWORK [thread1] Failed to connect to 127.0.0.1:20015, reason: Connection refused [js_test:multi_coll_drop] 2016-04-06T02:52:18.522-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] [js_test:multi_coll_drop] 
2016-04-06T02:52:18.524-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] ** NOTE: This is a development version (3.3.4-37-g36f3ff8) of MongoDB. [js_test:multi_coll_drop] 2016-04-06T02:52:18.524-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] ** Not recommended for production. [js_test:multi_coll_drop] 2016-04-06T02:52:18.525-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] [js_test:multi_coll_drop] 2016-04-06T02:52:18.526-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified. [js_test:multi_coll_drop] 2016-04-06T02:52:18.528-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] ** Read and write access to data and configuration is unrestricted, [js_test:multi_coll_drop] 2016-04-06T02:52:18.529-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] ** and the server listens on all available network interfaces. [js_test:multi_coll_drop] 2016-04-06T02:52:18.531-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended. [js_test:multi_coll_drop] 2016-04-06T02:52:18.532-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [main] [js_test:multi_coll_drop] 2016-04-06T02:52:18.535-0500 s20015| 2016-04-06T02:52:07.334-0500 I SHARDING [mongosMain] MongoS version 3.3.4-37-g36f3ff8 starting: pid=80193 port=20015 64-bit host=mongovm16 (--help for usage) [js_test:multi_coll_drop] 2016-04-06T02:52:18.535-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] db version v3.3.4-37-g36f3ff8 [js_test:multi_coll_drop] 2016-04-06T02:52:18.536-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] git version: 36f3ff8da1f7ae3710ceacc4e13adfd4abdb99da [js_test:multi_coll_drop] 2016-04-06T02:52:18.538-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013 [js_test:multi_coll_drop] 2016-04-06T02:52:18.539-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] allocator: tcmalloc [js_test:multi_coll_drop] 2016-04-06T02:52:18.539-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] modules: enterprise [js_test:multi_coll_drop] 2016-04-06T02:52:18.540-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] build environment: [js_test:multi_coll_drop] 2016-04-06T02:52:18.541-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] distmod: rhel71 [js_test:multi_coll_drop] 2016-04-06T02:52:18.541-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] distarch: ppc64le [js_test:multi_coll_drop] 2016-04-06T02:52:18.541-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] target_arch: ppc64le [js_test:multi_coll_drop] 2016-04-06T02:52:18.544-0500 s20015| 2016-04-06T02:52:07.334-0500 I CONTROL [mongosMain] options: { net: { port: 20015 }, setParameter: { enableTestCommands: "1" }, sharding: { chunkSize: 50, configDB: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013" }, systemLog: { verbosity: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.545-0500 s20015| 2016-04-06T02:52:07.335-0500 I SHARDING [mongosMain] Updating config server connection string to: multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:18.546-0500 s20015| 2016-04-06T02:52:07.335-0500 I NETWORK [mongosMain] Starting new replica set monitor for multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 
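At this point the harness launches a second mongos (s20015, port 20015) against the same config replica set; the options block above shows the configDB connection string, chunkSize 50, and -vv verbosity it was started with. In a jstest the equivalent is a MongoRunner call; a minimal sketch, assuming the config replica set is already up and relying on MongoRunner's convention of turning extra option keys into command-line flags (the -vv flag is left out here):

    // Start an additional mongos pointed at the existing config replica set.
    var extraMongos = MongoRunner.runMongos({
        configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013",
        chunkSize: 50,
        setParameter: "enableTestCommands=1"
    });
    assert.neq(null, extraMongos, "mongos failed to start");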
[js_test:multi_coll_drop] 2016-04-06T02:52:18.546-0500 s20015| 2016-04-06T02:52:07.335-0500 D COMMAND [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher [js_test:multi_coll_drop] 2016-04-06T02:52:18.547-0500 s20015| 2016-04-06T02:52:07.335-0500 I NETWORK [ReplicaSetMonitorWatcher] starting [js_test:multi_coll_drop] 2016-04-06T02:52:18.548-0500 s20015| 2016-04-06T02:52:07.335-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.549-0500 s20015| 2016-04-06T02:52:07.335-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.549-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-4-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.551-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-5-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.552-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-7-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.553-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-9-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.556-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-3-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.557-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-8-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.559-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-11-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.561-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-12-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.561-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-15-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.562-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-10-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.563-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-16-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.564-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.565-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-6-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.568-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-13-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.568-0500 s20015| 
2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-17-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.569-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-14-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.570-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-18-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.572-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-19-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.573-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-21-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.574-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-20-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.575-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-22-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.579-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-23-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.579-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-25-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.582-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-26-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.812-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-27-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.813-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-28-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.815-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-30-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.815-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-29-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.818-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-31-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.819-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-32-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.820-0500 s20015| 2016-04-06T02:52:07.336-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-33-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.821-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-34-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 
2016-04-06T02:52:18.821-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-35-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.821-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-36-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.822-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-37-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.822-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-38-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.823-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-40-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.825-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-39-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.827-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-41-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.829-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-42-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.830-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-1-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.831-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-43-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.832-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-44-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.834-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-45-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.837-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-46-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.838-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-47-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.839-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-48-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.841-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-24-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.843-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-50-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.845-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-52-0] The NetworkInterfaceASIO worker thread is spinning up 
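The burst of NetworkInterfaceASIO-TaskExecutorPool-N-0 threads here (continuing just below through pool 63) is the mongos sharding task-executor pool spinning up, one ASIO worker per executor; with one executor per CPU core as the default sizing, a 64-way ppc64le host produces exactly TaskExecutorPool-0 through TaskExecutorPool-63. A minimal sketch of inspecting the pool size, under the assumption that this build exposes the taskExecutorPoolSize server parameter (it is documented for mongos in later releases):

    // Read the sharding task-executor pool size from a mongos. Assumption:
    // taskExecutorPoolSize is available via getParameter on this build.
    var param = db.adminCommand({ getParameter: 1, taskExecutorPoolSize: 1 });
    printjson(param);
    // A startup-time override would look like:
    //   mongos --setParameter taskExecutorPoolSize=4 ...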
[js_test:multi_coll_drop] 2016-04-06T02:52:18.847-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-49-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.850-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-53-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.852-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-55-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.854-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-51-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.855-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-54-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.856-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-57-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.857-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-59-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.858-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-58-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.860-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-61-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.862-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-56-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.863-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-60-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.865-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-62-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.866-0500 s20015| 2016-04-06T02:52:07.337-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-63-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.867-0500 s20015| 2016-04-06T02:52:07.338-0500 D NETWORK [mongosMain] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:18.869-0500 s20015| 2016-04-06T02:52:07.338-0500 I SHARDING [thread1] creating distributed lock ping thread for process mongovm16:20015:1459929127:-1485108316 (sleeping for 30000ms) [js_test:multi_coll_drop] 2016-04-06T02:52:18.870-0500 s20015| 2016-04-06T02:52:07.338-0500 D NETWORK [mongosMain] creating new connection to:mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:18.870-0500 s20015| 2016-04-06T02:52:07.338-0500 D NETWORK [replSetDistLockPinger] creating new connection to:mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.872-0500 s20015| 2016-04-06T02:52:07.339-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:18.873-0500 s20015| 2016-04-06T02:52:07.339-0500 D NETWORK 
[mongosMain] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:18.875-0500 s20015| 2016-04-06T02:52:07.339-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:18.877-0500 c20012| 2016-04-06T02:52:07.339-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36644 #8 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:18.878-0500 c20011| 2016-04-06T02:52:07.339-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58976 #18 (14 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:18.880-0500 s20015| 2016-04-06T02:52:07.339-0500 D NETWORK [replSetDistLockPinger] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:18.881-0500 c20011| 2016-04-06T02:52:07.339-0500 D COMMAND [conn18] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.882-0500 c20011| 2016-04-06T02:52:07.339-0500 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.882-0500 s20015| 2016-04-06T02:52:07.340-0500 D NETWORK [replSetDistLockPinger] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:52:18.883-0500 c20011| 2016-04-06T02:52:07.340-0500 D COMMAND [conn18] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.884-0500 c20011| 2016-04-06T02:52:07.340-0500 I COMMAND [conn18] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.885-0500 c20011| 2016-04-06T02:52:07.340-0500 D COMMAND [conn18] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.887-0500 c20011| 2016-04-06T02:52:07.340-0500 I COMMAND [conn18] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.890-0500 s20015| 2016-04-06T02:52:07.340-0500 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 1 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.340-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929127338) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.893-0500 s20015| 2016-04-06T02:52:07.340-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] The NetworkInterfaceASIO worker thread is spinning up [js_test:multi_coll_drop] 2016-04-06T02:52:18.894-0500 s20015| 2016-04-06T02:52:07.340-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.895-0500 s20015| 2016-04-06T02:52:07.340-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 2 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.896-0500 c20011| 2016-04-06T02:52:07.340-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58977 #19 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:18.898-0500 c20011| 2016-04-06T02:52:07.341-0500 D COMMAND [conn19] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.899-0500 c20011| 
2016-04-06T02:52:07.341-0500 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.902-0500 s20015| 2016-04-06T02:52:07.341-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.902-0500 s20015| 2016-04-06T02:52:07.341-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 2 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:18.903-0500 s20015| 2016-04-06T02:52:07.341-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:18.909-0500 c20011| 2016-04-06T02:52:07.341-0500 D COMMAND [conn19] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929127338) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.911-0500 c20011| 2016-04-06T02:52:07.341-0500 D QUERY [conn19] Using idhack: { _id: "mongovm16:20015:1459929127:-1485108316" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.914-0500 c20011| 2016-04-06T02:52:07.342-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:506 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 143ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.919-0500 c20011| 2016-04-06T02:52:07.342-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:506 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 143ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.923-0500 c20013| 2016-04-06T02:52:07.343-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 216 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|8, t: 1, h: -5595581744911205924, v: 2, op: "i", ns: "config.lockpings", o: { _id: "mongovm16:20015:1459929127:-1485108316", ping: new Date(1459929127338) } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.925-0500 c20012| 2016-04-06T02:52:07.342-0500 D COMMAND [conn8] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.931-0500 c20012| 2016-04-06T02:52:07.343-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 213 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|8, t: 1, h: -5595581744911205924, v: 2, op: "i", ns: "config.lockpings", o: { _id: "mongovm16:20015:1459929127:-1485108316", ping: new Date(1459929127338) } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.936-0500 c20012| 2016-04-06T02:52:07.343-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|8 and ending at ts: Timestamp 1459929127000|8 
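Requests 216 and 213 above are the secondaries' tailing getMore calls on the primary's oplog cursor; each batch carries exactly one document, the insert into config.lockpings made by the new mongos's distributed-lock pinger (the findAndModify on "lockpings" a few entries earlier). The replicated entry can be inspected by hand from any member; a minimal sketch reading the local database:

    // Fetch the most recent config.lockpings insert from the oplog, i.e. the
    // operation the secondaries' fetchers just read.
    var entry = db.getSiblingDB("local").oplog.rs
                  .find({ ns: "config.lockpings", op: "i" })
                  .sort({ $natural: -1 })
                  .limit(1)
                  .next();
    printjson(entry);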
[js_test:multi_coll_drop] 2016-04-06T02:52:18.936-0500 c20012| 2016-04-06T02:52:07.343-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.938-0500 c20013| 2016-04-06T02:52:07.343-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|8 and ending at ts: Timestamp 1459929127000|8 [js_test:multi_coll_drop] 2016-04-06T02:52:18.939-0500 s20015| 2016-04-06T02:52:07.343-0500 D NETWORK [mongosMain] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:52:18.940-0500 c20012| 2016-04-06T02:52:07.343-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.941-0500 c20012| 2016-04-06T02:52:07.343-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.943-0500 c20012| 2016-04-06T02:52:07.343-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.944-0500 c20012| 2016-04-06T02:52:07.343-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:18.946-0500 c20012| 2016-04-06T02:52:07.343-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.949-0500 s20015| 2016-04-06T02:52:07.343-0500 D ASIO [mongosMain] startCommand: RemoteCommand 3 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:37.343-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.950-0500 s20015| 2016-04-06T02:52:07.343-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:18.951-0500 c20012| 2016-04-06T02:52:07.343-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.952-0500 c20012| 2016-04-06T02:52:07.343-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.953-0500 c20012| 2016-04-06T02:52:07.343-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.953-0500 c20012| 2016-04-06T02:52:07.343-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.955-0500 c20012| 2016-04-06T02:52:07.343-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.956-0500 s20015| 2016-04-06T02:52:07.343-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 4 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:18.959-0500 c20012| 2016-04-06T02:52:07.343-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36647 #9 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:18.959-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool 
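RemoteCommand 3 is the config-metadata read protocol in miniature: mongos reads config.shards with readConcern level "majority" plus an afterOpTime gate, and the serving node (c20012, a few entries below) waits for a committed snapshot covering that optime before answering, which is why it returns an empty firstBatch and mongos then logs "found 0 shards". The raw command can be reproduced from the shell; a minimal sketch, with the caveat that afterOpTime is an internal field shown only to mirror the log:

    // Majority read of config.shards gated on an optime, as issued by mongos.
    var res = db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(0, 0), t: NumberLong(-1) } },
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);  // [] at this point: no shards registered yet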
[js_test:multi_coll_drop] 2016-04-06T02:52:18.960-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.960-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.961-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.961-0500 c20012| 2016-04-06T02:52:07.344-0500 D COMMAND [conn9] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:18.962-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.966-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.967-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.968-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.968-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.970-0500 c20012| 2016-04-06T02:52:07.344-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:18.970-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.972-0500 s20015| 2016-04-06T02:52:07.344-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:18.976-0500 s20015| 2016-04-06T02:52:07.344-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 4 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:18.977-0500 s20015| 2016-04-06T02:52:07.344-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 3 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:18.978-0500 c20012| 2016-04-06T02:52:07.344-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:18.979-0500 c20012| 2016-04-06T02:52:07.344-0500 D COMMAND [conn9] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.982-0500 c20013| 2016-04-06T02:52:07.344-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:18.983-0500 c20012| 2016-04-06T02:52:07.344-0500 D COMMAND [conn9] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:18.984-0500 c20012| 2016-04-06T02:52:07.344-0500 D COMMAND [conn9] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:18.988-0500 c20012| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.990-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.993-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.994-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.995-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.995-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.997-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.997-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:18.998-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.011-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.011-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.013-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.013-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.014-0500 c20013| 2016-04-06T02:52:07.344-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.014-0500 c20013| 2016-04-06T02:52:07.344-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:19.014-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.015-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.015-0500 c20013| 
2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.017-0500 c20012| 2016-04-06T02:52:07.345-0500 D QUERY [conn9] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.017-0500 c20012| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.019-0500 c20012| 2016-04-06T02:52:07.345-0500 I COMMAND [conn9] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:370 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 583 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.021-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.021-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.022-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.023-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.023-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.024-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.024-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.024-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.025-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.026-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.026-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.027-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.028-0500 c20013| 2016-04-06T02:52:07.345-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.031-0500 c20011| 2016-04-06T02:52:07.350-0500 D REPL [conn19] Required snapshot 
optime: { ts: Timestamp 1459929127000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|7, t: 1 }, name-id: "65" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.032-0500 c20013| 2016-04-06T02:52:07.350-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.033-0500 c20013| 2016-04-06T02:52:07.350-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.034-0500 c20013| 2016-04-06T02:52:07.350-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.039-0500 s20015| 2016-04-06T02:52:07.345-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 3 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.041-0500 c20012| 2016-04-06T02:52:07.345-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 216 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.345-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.042-0500 c20012| 2016-04-06T02:52:07.350-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 216 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.045-0500 s20015| 2016-04-06T02:52:07.350-0500 D SHARDING [mongosMain] found 0 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929127000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.049-0500 s20015| 2016-04-06T02:52:07.351-0500 D ASIO [mongosMain] startCommand: RemoteCommand 6 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:37.351-0500 cmd:{ find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.052-0500 c20011| 2016-04-06T02:52:07.351-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.053-0500 s20015| 2016-04-06T02:52:07.351-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 6 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:19.054-0500 c20013| 2016-04-06T02:52:07.351-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.056-0500 c20012| 2016-04-06T02:52:07.351-0500 D COMMAND [conn9] run command config.$cmd { find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.058-0500 c20012| 2016-04-06T02:52:07.351-0500 D COMMAND [conn9] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.060-0500 c20012| 2016-04-06T02:52:07.351-0500 D COMMAND [conn9] Using 'committed' snapshot. 
{ find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.061-0500 c20012| 2016-04-06T02:52:07.351-0500 D QUERY [conn9] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.064-0500 c20013| 2016-04-06T02:52:07.351-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.067-0500 c20013| 2016-04-06T02:52:07.351-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 218 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.069-0500 c20013| 2016-04-06T02:52:07.351-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 218 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.075-0500 c20012| 2016-04-06T02:52:07.351-0500 I COMMAND [conn9] command config.version command: find { find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:457 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.079-0500 c20011| 2016-04-06T02:52:07.351-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.079-0500 c20011| 2016-04-06T02:52:07.351-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:19.084-0500 c20011| 2016-04-06T02:52:07.351-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 
2016-04-06T02:52:19.087-0500 c20011| 2016-04-06T02:52:07.351-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|8, t: 1 } and is durable through: { ts: Timestamp 1459929127000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.087-0500 c20011| 2016-04-06T02:52:07.351-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|7, t: 1 }, name-id: "65" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.088-0500 c20012| 2016-04-06T02:52:07.351-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.091-0500 c20012| 2016-04-06T02:52:07.351-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.092-0500 c20013| 2016-04-06T02:52:07.351-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 218 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.092-0500 c20012| 2016-04-06T02:52:07.351-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.093-0500 c20012| 2016-04-06T02:52:07.351-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.095-0500 c20012| 2016-04-06T02:52:07.351-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.095-0500 c20012| 2016-04-06T02:52:07.351-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.097-0500 c20012| 2016-04-06T02:52:07.351-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.100-0500 c20011| 2016-04-06T02:52:07.351-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.103-0500 c20013| 2016-04-06T02:52:07.351-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 220 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.351-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.104-0500 c20013| 2016-04-06T02:52:07.351-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 220 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.107-0500 s20015| 2016-04-06T02:52:07.351-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 6 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: 
ObjectId('5704c02606c33406d4d9c0b9') } ], id: 0, ns: "config.version" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.108-0500 c20011| 2016-04-06T02:52:07.351-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.111-0500 s20015| 2016-04-06T02:52:07.352-0500 D ASIO [mongosMain] startCommand: RemoteCommand 8 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.352-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.112-0500 s20015| 2016-04-06T02:52:07.352-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.114-0500 s20015| 2016-04-06T02:52:07.352-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 9 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.116-0500 c20011| 2016-04-06T02:52:07.352-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58979 #20 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:19.117-0500 c20011| 2016-04-06T02:52:07.352-0500 D COMMAND [conn20] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.119-0500 c20011| 2016-04-06T02:52:07.352-0500 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.119-0500 s20015| 2016-04-06T02:52:07.352-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.120-0500 s20015| 2016-04-06T02:52:07.352-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 9 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:19.122-0500 s20015| 2016-04-06T02:52:07.352-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 8 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.124-0500 c20011| 2016-04-06T02:52:07.352-0500 D COMMAND [conn20] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.126-0500 c20011| 2016-04-06T02:52:07.352-0500 D COMMAND [conn20] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.132-0500 c20011| 2016-04-06T02:52:07.352-0500 D COMMAND [conn20] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.133-0500 c20011| 2016-04-06T02:52:07.352-0500 D QUERY [conn20] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:19.137-0500 c20011| 2016-04-06T02:52:07.353-0500 I COMMAND [conn20] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:414 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.140-0500 c20012| 2016-04-06T02:52:07.352-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.141-0500 c20012| 2016-04-06T02:52:07.352-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.145-0500 c20012| 2016-04-06T02:52:07.352-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.147-0500 c20012| 2016-04-06T02:52:07.352-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.150-0500 c20012| 2016-04-06T02:52:07.352-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.151-0500 c20012| 2016-04-06T02:52:07.352-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.152-0500 c20012| 2016-04-06T02:52:07.352-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.154-0500 c20012| 2016-04-06T02:52:07.352-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.155-0500 c20012| 2016-04-06T02:52:07.352-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.172-0500 c20012| 2016-04-06T02:52:07.353-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.177-0500 c20012| 2016-04-06T02:52:07.353-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 217 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.178-0500 c20012| 2016-04-06T02:52:07.353-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 217 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.180-0500 s20015| 2016-04-06T02:52:07.353-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 8 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.182-0500 s20015| 2016-04-06T02:52:07.353-0500 D SHARDING [mongosMain] Found MaxChunkSize: 50 [js_test:multi_coll_drop] 2016-04-06T02:52:19.189-0500 s20015| 2016-04-06T02:52:07.353-0500 D ASIO [mongosMain] startCommand: RemoteCommand 11 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.353-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.195-0500 c20011| 2016-04-06T02:52:07.353-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.196-0500 c20011| 2016-04-06T02:52:07.353-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:19.197-0500 c20011| 2016-04-06T02:52:07.353-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|8, t: 1 } and is durable through: { ts: Timestamp 1459929127000|7, t: 1 } [js_test:multi_coll_drop] 
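
"Found MaxChunkSize: 50" above comes from a single settings document that mongos fetches by _id (hence planSummary: IDHACK). A short shell sketch of reading and setting it; the upsert form is an assumption about how the value got there, not something shown in this log:

    // The balancer chunk size lives in one document in config.settings.
    var config = db.getSiblingDB("config");
    config.settings.find({ _id: "chunksize" }).limit(1); // -> { _id: "chunksize", value: 50 }
    // Changing it is an upserted update of that same document:
    config.settings.update({ _id: "chunksize" }, { $set: { value: 50 } }, { upsert: true });
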
2016-04-06T02:52:19.199-0500 c20011| 2016-04-06T02:52:07.353-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929127000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|7, t: 1 }, name-id: "65" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.202-0500 c20011| 2016-04-06T02:52:07.353-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.210-0500 c20011| 2016-04-06T02:52:07.353-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.211-0500 s20015| 2016-04-06T02:52:07.353-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 11 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.212-0500 c20011| 2016-04-06T02:52:07.353-0500 D COMMAND [conn20] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.214-0500 c20011| 2016-04-06T02:52:07.353-0500 D REPL [conn20] Required snapshot optime: { ts: Timestamp 1459929127000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|7, t: 1 }, name-id: "65" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.215-0500 c20012| 2016-04-06T02:52:07.353-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 217 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.218-0500 c20013| 2016-04-06T02:52:07.362-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.220-0500 c20013| 2016-04-06T02:52:07.362-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 221 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.223-0500 c20013| 2016-04-06T02:52:07.362-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 221 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.227-0500 c20011| 2016-04-06T02:52:07.362-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.227-0500 c20011| 2016-04-06T02:52:07.363-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:19.237-0500 c20011| 2016-04-06T02:52:07.363-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.241-0500 c20011| 2016-04-06T02:52:07.363-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|8, t: 1 } and is durable through: { ts: Timestamp 1459929127000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.242-0500 c20011| 2016-04-06T02:52:07.363-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.244-0500 c20011| 2016-04-06T02:52:07.363-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.246-0500 c20013| 2016-04-06T02:52:07.363-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 221 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.248-0500 c20011| 2016-04-06T02:52:07.363-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.251-0500 c20011| 2016-04-06T02:52:07.363-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|7, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { 
acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.252-0500 c20013| 2016-04-06T02:52:07.363-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 220 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.255-0500 c20011| 2016-04-06T02:52:07.363-0500 I COMMAND [conn19] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929127338) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1459929127338) } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 reslen:415 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 22ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.256-0500 c20012| 2016-04-06T02:52:07.363-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 216 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.257-0500 c20012| 2016-04-06T02:52:07.363-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.260-0500 c20011| 2016-04-06T02:52:07.363-0500 I COMMAND [conn20] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:0 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.264-0500 c20012| 2016-04-06T02:52:07.363-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:19.270-0500 s20015| 2016-04-06T02:52:07.363-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1 finished with response: { lastErrorObject: { updatedExisting: false, n: 1, upserted: "mongovm16:20015:1459929127:-1485108316" }, value: null, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.274-0500 c20012| 2016-04-06T02:52:07.363-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 220 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.363-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.277-0500 s20015| 2016-04-06T02:52:07.363-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 11 finished with response: { ok: 1, n: 0, opTime: { ts: Timestamp 1459929127000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.279-0500 s20015| 2016-04-06T02:52:07.363-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document [js_test:multi_coll_drop] 2016-04-06T02:52:19.282-0500 c20012| 2016-04-06T02:52:07.363-0500 D ASIO 
[NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 220 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.286-0500 s20015| 2016-04-06T02:52:07.363-0500 D ASIO [mongosMain] startCommand: RemoteCommand 14 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.363-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.287-0500 c20011| 2016-04-06T02:52:07.363-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.290-0500 s20015| 2016-04-06T02:52:07.364-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 14 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.293-0500 c20013| 2016-04-06T02:52:07.364-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.294-0500 c20013| 2016-04-06T02:52:07.364-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:19.296-0500 c20013| 2016-04-06T02:52:07.364-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 224 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.364-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.298-0500 c20013| 2016-04-06T02:52:07.364-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 224 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.302-0500 c20011| 2016-04-06T02:52:07.364-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.303-0500 c20011| 2016-04-06T02:52:07.364-0500 D COMMAND [conn20] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.306-0500 c20011| 2016-04-06T02:52:07.364-0500 I COMMAND [conn20] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:0 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.308-0500 s20015| 2016-04-06T02:52:07.365-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 14 finished with response: { ok: 1, n: 0, opTime: { ts: Timestamp 1459929127000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.310-0500 s20015| 2016-04-06T02:52:07.365-0500 D ASIO [mongosMain] startCommand: RemoteCommand 16 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.365-0500 cmd:{ 
insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.311-0500 s20015| 2016-04-06T02:52:07.365-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 16 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.312-0500 c20011| 2016-04-06T02:52:07.365-0500 D COMMAND [conn20] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.329-0500 c20011| 2016-04-06T02:52:07.366-0500 I COMMAND [conn20] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:0 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.332-0500 s20015| 2016-04-06T02:52:07.366-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 16 finished with response: { ok: 1, n: 0, opTime: { ts: Timestamp 1459929127000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.338-0500 s20015| 2016-04-06T02:52:07.366-0500 D ASIO [mongosMain] startCommand: RemoteCommand 18 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.366-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.341-0500 s20015| 2016-04-06T02:52:07.366-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 18 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.351-0500 c20011| 2016-04-06T02:52:07.367-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.353-0500 c20011| 2016-04-06T02:52:07.367-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:19.355-0500 c20011| 2016-04-06T02:52:07.367-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|8, t: 1 } and is durable through: { ts: Timestamp 1459929127000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.361-0500 c20011| 2016-04-06T02:52:07.367-0500 D COMMAND [conn20] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.365-0500 c20011| 
2016-04-06T02:52:07.367-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.375-0500 c20011| 2016-04-06T02:52:07.367-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.379-0500 c20011| 2016-04-06T02:52:07.367-0500 I COMMAND [conn20] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:0 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.381-0500 s20015| 2016-04-06T02:52:07.367-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 18 finished with response: { ok: 1, n: 0, opTime: { ts: Timestamp 1459929127000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.385-0500 s20015| 2016-04-06T02:52:07.367-0500 D ASIO [mongosMain] startCommand: RemoteCommand 20 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.367-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.387-0500 s20015| 2016-04-06T02:52:07.367-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 20 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.389-0500 c20011| 2016-04-06T02:52:07.367-0500 D COMMAND [conn20] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.394-0500 c20011| 2016-04-06T02:52:07.368-0500 I COMMAND [conn20] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:0 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.396-0500 s20015| 2016-04-06T02:52:07.368-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 20 finished with response: { ok: 1, n: 0, opTime: { ts: Timestamp 1459929127000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.399-0500 s20015| 2016-04-06T02:52:07.368-0500 D ASIO [mongosMain] startCommand: RemoteCommand 22 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.368-0500 cmd:{ 
insert: "system.indexes", documents: [ { ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.403-0500 s20015| 2016-04-06T02:52:07.368-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 22 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.406-0500 c20011| 2016-04-06T02:52:07.369-0500 D COMMAND [conn20] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.409-0500 c20011| 2016-04-06T02:52:07.369-0500 I COMMAND [conn20] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:0 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.412-0500 s20015| 2016-04-06T02:52:07.369-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 22 finished with response: { ok: 1, n: 0, opTime: { ts: Timestamp 1459929127000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.414-0500 s20015| 2016-04-06T02:52:07.369-0500 D ASIO [mongosMain] startCommand: RemoteCommand 24 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.369-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.415-0500 s20015| 2016-04-06T02:52:07.369-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 24 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.421-0500 c20012| 2016-04-06T02:52:07.366-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.427-0500 c20012| 2016-04-06T02:52:07.367-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 221 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.428-0500 c20012| 2016-04-06T02:52:07.367-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 221 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.432-0500 c20012| 2016-04-06T02:52:07.367-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 221 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.436-0500 c20011| 2016-04-06T02:52:07.370-0500 D COMMAND [conn20] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.439-0500 c20011| 2016-04-06T02:52:07.371-0500 I COMMAND [conn20] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:0 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.443-0500 s20015| 2016-04-06T02:52:07.371-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 24 finished with response: { ok: 1, n: 0, opTime: { ts: Timestamp 1459929127000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.447-0500 s20015| 2016-04-06T02:52:07.371-0500 D ASIO [mongosMain] startCommand: RemoteCommand 26 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.371-0500 cmd:{ insert: "system.indexes", documents: [ { ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.449-0500 s20015| 2016-04-06T02:52:07.371-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 26 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.456-0500 c20011| 2016-04-06T02:52:07.371-0500 D COMMAND [conn20] run command config.$cmd { insert: "system.indexes", documents: [ { ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.466-0500 c20011| 2016-04-06T02:52:07.372-0500 I COMMAND [conn20] command config.system.indexes command: insert { insert: "system.indexes", documents: [ { ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ninserted:0 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.470-0500 s20015| 2016-04-06T02:52:07.372-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 26 finished with response: { ok: 1, n: 0, opTime: { ts: Timestamp 1459929127000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.472-0500 s20015| 2016-04-06T02:52:07.372-0500 D ASIO [mongosMain] startCommand: RemoteCommand 28 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:37.372-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.473-0500 s20015| 2016-04-06T02:52:07.372-0500 D COMMAND [ClusterCursorCleanupJob] BackgroundJob starting: ClusterCursorCleanupJob [js_test:multi_coll_drop] 
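
The run of config.system.indexes inserts above (requests 11 through 26) is mongos ensuring the sharding metadata indexes at startup, using the legacy system.indexes insert path with writeConcern w:"majority"; ninserted:0 on each means every index already existed. The same specs expressed as createIndex calls, as a sketch:

    // Index specs mongos ensures on the config database at startup,
    // written here as createIndex calls instead of system.indexes inserts.
    var config = db.getSiblingDB("config");
    config.chunks.createIndex({ ns: 1, min: 1 }, { unique: true });
    config.chunks.createIndex({ ns: 1, shard: 1, min: 1 }, { unique: true });
    config.chunks.createIndex({ ns: 1, lastmod: 1 }, { unique: true });
    config.shards.createIndex({ host: 1 }, { unique: true });
    config.locks.createIndex({ ts: 1 });
    config.locks.createIndex({ state: 1, process: 1 });
    config.lockpings.createIndex({ ping: 1 });
    config.tags.createIndex({ ns: 1, min: 1 }, { unique: true });
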
2016-04-06T02:52:19.475-0500 s20015| 2016-04-06T02:52:07.372-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 28 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.476-0500 s20015| 2016-04-06T02:52:07.372-0500 D COMMAND [Balancer] BackgroundJob starting: Balancer [js_test:multi_coll_drop] 2016-04-06T02:52:19.482-0500 c20011| 2016-04-06T02:52:07.372-0500 D COMMAND [conn20] run command admin.$cmd { _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.485-0500 c20011| 2016-04-06T02:52:07.372-0500 D COMMAND [conn20] command: _getUserCacheGeneration [js_test:multi_coll_drop] 2016-04-06T02:52:19.486-0500 s20015| 2016-04-06T02:52:07.372-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker [js_test:multi_coll_drop] 2016-04-06T02:52:19.487-0500 s20015| 2016-04-06T02:52:07.373-0500 I SHARDING [Balancer] about to contact config servers and shards [js_test:multi_coll_drop] 2016-04-06T02:52:19.491-0500 s20015| 2016-04-06T02:52:07.373-0500 D ASIO [Balancer] startCommand: RemoteCommand 29 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.373-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|8, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.494-0500 c20011| 2016-04-06T02:52:07.373-0500 I COMMAND [conn20] command admin.$cmd command: _getUserCacheGeneration { _getUserCacheGeneration: 1, maxTimeMS: 30000 } numYields:0 reslen:337 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.498-0500 s20015| 2016-04-06T02:52:07.373-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 29 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.500-0500 c20011| 2016-04-06T02:52:07.373-0500 D COMMAND [conn19] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|8, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.500-0500 s20015| 2016-04-06T02:52:07.373-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 28 finished with response: { cacheGeneration: ObjectId('5704c01c3876c4cfd2eb3eb7'), ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.502-0500 s20015| 2016-04-06T02:52:07.373-0500 D COMMAND [UserCacheInvalidatorThread] BackgroundJob starting: UserCacheInvalidatorThread [js_test:multi_coll_drop] 2016-04-06T02:52:19.505-0500 s20015| 2016-04-06T02:52:07.373-0500 D NETWORK [mongosMain] fd limit hard:64000 soft:64000 max conn: 51200 [js_test:multi_coll_drop] 2016-04-06T02:52:19.507-0500 c20011| 2016-04-06T02:52:07.373-0500 D COMMAND [conn19] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.509-0500 c20011| 2016-04-06T02:52:07.373-0500 D COMMAND [conn19] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|8, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.512-0500 c20011| 2016-04-06T02:52:07.373-0500 D QUERY [conn19] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.512-0500 s20015| 2016-04-06T02:52:07.373-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner [js_test:multi_coll_drop] 2016-04-06T02:52:19.518-0500 c20011| 2016-04-06T02:52:07.373-0500 I COMMAND [conn19] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|8, t: 1 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:390 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.524-0500 s20015| 2016-04-06T02:52:07.374-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 29 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.526-0500 s20015| 2016-04-06T02:52:07.374-0500 D SHARDING [Balancer] found 0 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929127000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.528-0500 s20015| 2016-04-06T02:52:07.374-0500 I SHARDING [Balancer] config servers and shards contacted successfully [js_test:multi_coll_drop] 2016-04-06T02:52:19.532-0500 s20015| 2016-04-06T02:52:07.374-0500 I SHARDING [Balancer] balancer id: mongovm16:20015 started [js_test:multi_coll_drop] 2016-04-06T02:52:19.537-0500 s20015| 2016-04-06T02:52:07.374-0500 D ASIO [Balancer] startCommand: RemoteCommand 32 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.374-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127374), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.537-0500 s20015| 2016-04-06T02:52:07.374-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 32 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.540-0500 c20011| 2016-04-06T02:52:07.374-0500 D COMMAND [conn19] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127374), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.541-0500 c20011| 2016-04-06T02:52:07.374-0500 D QUERY [conn19] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.544-0500 c20011| 2016-04-06T02:52:07.375-0500 I WRITE [conn19] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127374), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.546-0500 c20013| 2016-04-06T02:52:07.376-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 224 finished 
with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|9, t: 1, h: 5968556817239947840, v: 2, op: "i", ns: "config.mongos", o: { _id: "mongovm16:20015", ping: new Date(1459929127374), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.555-0500 c20012| 2016-04-06T02:52:07.376-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 220 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|9, t: 1, h: 5968556817239947840, v: 2, op: "i", ns: "config.mongos", o: { _id: "mongovm16:20015", ping: new Date(1459929127374), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.556-0500 c20012| 2016-04-06T02:52:07.376-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|9 and ending at ts: Timestamp 1459929127000|9 [js_test:multi_coll_drop] 2016-04-06T02:52:19.560-0500 c20011| 2016-04-06T02:52:07.375-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:538 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.564-0500 c20011| 2016-04-06T02:52:07.376-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:538 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.566-0500 c20013| 2016-04-06T02:52:07.376-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|9 and ending at ts: Timestamp 1459929127000|9 [js_test:multi_coll_drop] 2016-04-06T02:52:19.567-0500 c20012| 2016-04-06T02:52:07.376-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.568-0500 c20013| 2016-04-06T02:52:07.377-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.570-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.573-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.574-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.577-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.580-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.580-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.581-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.583-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.584-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.586-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.588-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.591-0500 c20012| 2016-04-06T02:52:07.376-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.591-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.592-0500 c20012| 2016-04-06T02:52:07.376-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:19.593-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.594-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.594-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.596-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.599-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.599-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
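
The config.mongos oplog entry being applied above (op: "i", "replication batch size is 1", and the burst of repl writer worker threads on c20012 and c20013) is the replication of the balancer's self-registration write from request 32. A shell sketch of that write, with the _id, ping value, and version string copied from the log; at w:"majority" the command returns only after a majority of config members have made the entry durable, which is exactly the apply traffic shown here:

    // The balancer's ping upsert into config.mongos, as sent by s20015 above.
    db.getSiblingDB("config").runCommand({
        update: "mongos",
        updates: [{
            q: { _id: "mongovm16:20015" },
            u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127374),
                         up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } },
            multi: false, upsert: true
        }],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
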
2016-04-06T02:52:19.600-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.602-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.604-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.605-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.625-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.627-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.628-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.629-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.629-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.630-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.631-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.631-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.631-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.632-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.634-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.635-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.636-0500 c20012| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.638-0500 c20011| 2016-04-06T02:52:07.377-0500 D REPL [conn19] Required snapshot optime: { ts: Timestamp 1459929127000|9, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|8, t: 1 }, name-id: "66" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.639-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.640-0500 c20013| 
2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.641-0500 c20013| 2016-04-06T02:52:07.377-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.643-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.644-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.647-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.647-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.649-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.649-0500 c20013| 2016-04-06T02:52:07.378-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:19.650-0500 c20012| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.654-0500 c20012| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.656-0500 c20012| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.661-0500 c20012| 2016-04-06T02:52:07.378-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 224 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.378-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.663-0500 c20012| 2016-04-06T02:52:07.378-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 224 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.664-0500 c20012| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.667-0500 c20011| 2016-04-06T02:52:07.378-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.668-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.672-0500 c20013| 2016-04-06T02:52:07.378-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 226 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.378-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.673-0500 c20012| 2016-04-06T02:52:07.378-0500 D QUERY [rsSync] Only one plan is available; it will be run but 
will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.673-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.675-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.679-0500 c20012| 2016-04-06T02:52:07.378-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.680-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.685-0500 c20012| 2016-04-06T02:52:07.378-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 225 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.686-0500 c20012| 2016-04-06T02:52:07.378-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 225 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.688-0500 c20013| 2016-04-06T02:52:07.378-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 226 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.689-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.689-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.690-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.691-0500 c20013| 2016-04-06T02:52:07.378-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.697-0500 c20013| 2016-04-06T02:52:07.379-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.699-0500 c20013| 2016-04-06T02:52:07.379-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.701-0500 c20013| 2016-04-06T02:52:07.379-0500 D EXECUTOR [repl 
writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.703-0500 c20011| 2016-04-06T02:52:07.378-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.704-0500 c20013| 2016-04-06T02:52:07.379-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.705-0500 c20013| 2016-04-06T02:52:07.379-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.706-0500 c20013| 2016-04-06T02:52:07.379-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.707-0500 c20013| 2016-04-06T02:52:07.379-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.709-0500 c20013| 2016-04-06T02:52:07.379-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:19.711-0500 c20013| 2016-04-06T02:52:07.379-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.715-0500 c20013| 2016-04-06T02:52:07.379-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.720-0500 c20013| 2016-04-06T02:52:07.379-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 227 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.721-0500 c20013| 2016-04-06T02:52:07.379-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 227 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.721-0500 c20013| 2016-04-06T02:52:07.380-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 227 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.724-0500 c20011| 2016-04-06T02:52:07.379-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.725-0500 c20011| 2016-04-06T02:52:07.380-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:19.726-0500 c20011| 2016-04-06T02:52:07.380-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.730-0500 c20011| 2016-04-06T02:52:07.380-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|9, t: 1 } and is durable through: { ts: Timestamp 1459929127000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.733-0500 c20011| 2016-04-06T02:52:07.380-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|9, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|8, t: 1 }, name-id: "66" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.738-0500 c20011| 2016-04-06T02:52:07.380-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.740-0500 c20011| 2016-04-06T02:52:07.380-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.740-0500 c20011| 2016-04-06T02:52:07.380-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:19.745-0500 c20011| 2016-04-06T02:52:07.380-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|9, t: 1 } and is durable through: { ts: Timestamp 1459929127000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.748-0500 c20011| 2016-04-06T02:52:07.380-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929127000|9, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|8, t: 1 }, name-id: "66" } [js_test:multi_coll_drop] 2016-04-06T02:52:19.751-0500 c20011| 2016-04-06T02:52:07.380-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: 
{ ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.755-0500 c20011| 2016-04-06T02:52:07.380-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.755-0500 c20012| 2016-04-06T02:52:07.380-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 225 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.760-0500 c20012| 2016-04-06T02:52:07.384-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.765-0500 c20012| 2016-04-06T02:52:07.384-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 227 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.767-0500 c20012| 2016-04-06T02:52:07.384-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 227 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.769-0500 c20011| 2016-04-06T02:52:07.384-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.770-0500 c20011| 2016-04-06T02:52:07.384-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:19.772-0500 c20011| 2016-04-06T02:52:07.384-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|9, t: 1 } and is durable through: { 
ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.773-0500 c20011| 2016-04-06T02:52:07.384-0500 D REPL [conn17] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.775-0500 c20011| 2016-04-06T02:52:07.384-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.776-0500 c20012| 2016-04-06T02:52:07.384-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 227 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.778-0500 c20011| 2016-04-06T02:52:07.384-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.780-0500 c20013| 2016-04-06T02:52:07.384-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 226 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.781-0500 c20012| 2016-04-06T02:52:07.384-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 224 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.784-0500 c20013| 2016-04-06T02:52:07.384-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.785-0500 c20013| 2016-04-06T02:52:07.384-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:19.786-0500 c20012| 2016-04-06T02:52:07.385-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.789-0500 c20012| 2016-04-06T02:52:07.385-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:19.792-0500 c20012| 2016-04-06T02:52:07.385-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 230 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.385-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.796-0500 c20013| 2016-04-06T02:52:07.385-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 230 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.385-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.797-0500 c20013| 2016-04-06T02:52:07.385-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 230 on host mongovm16:20011 
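
The "Updating _lastCommittedOpTime" lines show the commit-point rule at work: the committed optime advances to the highest optime that a majority of the set has durably reached. A toy illustration of that rule for this 3-member set (simplified to bare timestamp increments; not server code):

    // Simplified: real optimes carry {ts, t}; here only the ts increment matters.
    function majorityCommitPoint(durable) {
        var sorted = durable.slice().sort(function(a, b) { return a - b; });
        var majority = Math.floor(durable.length / 2) + 1;         // 2 of 3 members
        return sorted[durable.length - majority];                  // highest optime a majority reached
    }
    print(majorityCommitPoint([8, 9, 9]));                         // 9 -- matching the jump from |8 to |9 above
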
[js_test:multi_coll_drop] 2016-04-06T02:52:19.799-0500 c20012| 2016-04-06T02:52:07.385-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 230 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.803-0500 c20013| 2016-04-06T02:52:07.385-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.812-0500 c20013| 2016-04-06T02:52:07.385-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 231 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.814-0500 c20011| 2016-04-06T02:52:07.384-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.816-0500 c20011| 2016-04-06T02:52:07.384-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.877-0500 c20011| 2016-04-06T02:52:07.384-0500 I COMMAND [conn19] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127374), up: 0, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:445 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.878-0500 c20011| 2016-04-06T02:52:07.385-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } [js_test:multi_coll_drop] 
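
The Balancer reads just below go to the config-server secondary at mongovm16:20012 with readConcern { level: "majority", afterOpTime: ... }, which is why conn9 logs "Waiting for 'committed' snapshot". A hand-rolled version of the same read, with the optime copied from the log (a sketch, not the mongos code path):

    var cfg = new Mongo("mongovm16:20012").getDB("config");        // config-server secondary from the log
    printjson(cfg.runCommand({
        find: "settings",
        filter: {_id: "chunksize"},
        readConcern: {level: "majority",                           // only return majority-committed data,
                      afterOpTime: {ts: Timestamp(1459929127, 9),  // at or after this optime
                                    t: NumberLong(1)}},
        limit: 1,
        maxTimeMS: 30000
    }));                                                           // firstBatch: [{_id: "chunksize", value: 50}]
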
2016-04-06T02:52:19.879-0500 c20011| 2016-04-06T02:52:07.385-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.879-0500 c20013| 2016-04-06T02:52:07.385-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 231 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:19.880-0500 c20011| 2016-04-06T02:52:07.385-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:19.881-0500 c20011| 2016-04-06T02:52:07.385-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:19.881-0500 c20011| 2016-04-06T02:52:07.385-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.882-0500 s20015| 2016-04-06T02:52:07.385-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 32 finished with response: { ok: 1, nModified: 0, n: 1, upserted: [ { index: 0, _id: "mongovm16:20015" } ], opTime: { ts: Timestamp 1459929127000|9, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:19.883-0500 c20013| 2016-04-06T02:52:07.385-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 231 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.885-0500 s20015| 2016-04-06T02:52:07.385-0500 D ASIO [Balancer] startCommand: RemoteCommand 34 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:37.385-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.887-0500 s20015| 2016-04-06T02:52:07.387-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 34 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:19.888-0500 c20011| 2016-04-06T02:52:07.385-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|9, t: 1 } and is durable through: { ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.890-0500 c20011| 2016-04-06T02:52:07.385-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} 
protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.891-0500 c20012| 2016-04-06T02:52:07.389-0500 D COMMAND [conn9] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.895-0500 c20012| 2016-04-06T02:52:07.389-0500 D COMMAND [conn9] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.899-0500 c20012| 2016-04-06T02:52:07.389-0500 D COMMAND [conn9] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.901-0500 c20012| 2016-04-06T02:52:07.389-0500 D QUERY [conn9] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:19.907-0500 c20012| 2016-04-06T02:52:07.390-0500 I COMMAND [conn9] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:370 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.910-0500 s20015| 2016-04-06T02:52:07.390-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 34 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.930-0500 s20015| 2016-04-06T02:52:07.390-0500 D SHARDING [Balancer] found 0 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.932-0500 s20015| 2016-04-06T02:52:07.390-0500 D ASIO [Balancer] startCommand: RemoteCommand 36 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:37.390-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.934-0500 s20015| 2016-04-06T02:52:07.390-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 36 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:19.935-0500 c20012| 2016-04-06T02:52:07.390-0500 D COMMAND [conn9] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.937-0500 c20012| 2016-04-06T02:52:07.390-0500 D COMMAND [conn9] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.938-0500 c20012| 2016-04-06T02:52:07.390-0500 D COMMAND [conn9] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.938-0500 c20012| 2016-04-06T02:52:07.390-0500 D QUERY [conn9] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:19.944-0500 c20012| 2016-04-06T02:52:07.390-0500 I COMMAND [conn9] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:414 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:19.945-0500 s20015| 2016-04-06T02:52:07.390-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 36 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.946-0500 s20015| 2016-04-06T02:52:07.390-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:52:19.947-0500 s20015| 2016-04-06T02:52:07.390-0500 D ASIO [Balancer] startCommand: RemoteCommand 38 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:37.390-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.950-0500 s20015| 2016-04-06T02:52:07.390-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 38 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:19.952-0500 c20012| 2016-04-06T02:52:07.391-0500 D COMMAND [conn9] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.978-0500 c20012| 2016-04-06T02:52:07.391-0500 D COMMAND [conn9] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:19.981-0500 c20012| 2016-04-06T02:52:07.391-0500 D COMMAND [conn9] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:19.984-0500 c20012| 2016-04-06T02:52:07.391-0500 D QUERY [conn9] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:20.002-0500 c20012| 2016-04-06T02:52:07.391-0500 I COMMAND [conn9] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:0 docsExamined:0 idhack:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:372 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.002-0500 s20015| 2016-04-06T02:52:07.391-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 38 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.003-0500 s20015| 2016-04-06T02:52:07.391-0500 D SHARDING [Balancer] trying to acquire new distributed lock for balancer ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20015:1459929127:-1485108316 ) with lockSessionID: 5704c0275ce0eed80678aa0a, why: doing balance round [js_test:multi_coll_drop] 2016-04-06T02:52:20.003-0500 s20015| 2016-04-06T02:52:07.391-0500 D ASIO [Balancer] startCommand: RemoteCommand 40 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.391-0500 cmd:{ findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('5704c0275ce0eed80678aa0a'), state: 2, who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.003-0500 s20015| 2016-04-06T02:52:07.391-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 40 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.010-0500 c20011| 2016-04-06T02:52:07.391-0500 D COMMAND [conn19] run command config.$cmd { findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('5704c0275ce0eed80678aa0a'), state: 2, who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.010-0500 c20011| 2016-04-06T02:52:07.391-0500 D QUERY [conn19] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.010-0500 c20011| 2016-04-06T02:52:07.391-0500 D QUERY [conn19] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.011-0500 c20011| 2016-04-06T02:52:07.391-0500 D QUERY [conn19] Only one plan is available; it will be run but will not be cached. 
query: { _id: "balancer", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.012-0500 c20013| 2016-04-06T02:52:07.392-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 230 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|10, t: 1, h: 539319829080699756, v: 2, op: "u", ns: "config.locks", o2: { _id: "balancer" }, o: { $set: { ts: ObjectId('5704c0275ce0eed80678aa0a'), state: 2, who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391) } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.013-0500 c20011| 2016-04-06T02:52:07.392-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:628 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.019-0500 c20011| 2016-04-06T02:52:07.392-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:628 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.020-0500 c20013| 2016-04-06T02:52:07.392-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|10 and ending at ts: Timestamp 1459929127000|10 [js_test:multi_coll_drop] 2016-04-06T02:52:20.023-0500 c20012| 2016-04-06T02:52:07.392-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 230 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|10, t: 1, h: 539319829080699756, v: 2, op: "u", ns: "config.locks", o2: { _id: "balancer" }, o: { $set: { ts: ObjectId('5704c0275ce0eed80678aa0a'), state: 2, who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391) } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.023-0500 c20012| 2016-04-06T02:52:07.392-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|10 and ending at ts: Timestamp 1459929127000|10 [js_test:multi_coll_drop] 2016-04-06T02:52:20.028-0500 c20013| 2016-04-06T02:52:07.392-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.029-0500 c20012| 2016-04-06T02:52:07.392-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.031-0500 c20013| 2016-04-06T02:52:07.392-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.032-0500 c20013| 2016-04-06T02:52:07.392-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.035-0500 c20012| 2016-04-06T02:52:07.392-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.035-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.038-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.038-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.039-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.040-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.042-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.042-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.043-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.044-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.045-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.047-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.048-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.049-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.049-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.051-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.052-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.055-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool 
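
The config.locks update being replicated here is the balancer acquiring its distributed lock: a findAndModify that only matches when state is 0 (free) and flips it to 2 (held), with w: "majority" so the lock survives a config-server failover. Reconstructed from the command logged on conn19 above (values copied from the log; the ObjectId and Date are regenerated):

    var cfg = new Mongo("mongovm16:20011").getDB("config");        // config primary from the log
    printjson(cfg.runCommand({
        findAndModify: "locks",
        query: {_id: "balancer", state: 0},                        // succeeds only if nobody holds the lock
        update: {$set: {
            ts: ObjectId(), state: 2,                              // state 2 == locked
            who: "mongovm16:20015:1459929127:-1485108316:Balancer",
            process: "mongovm16:20015:1459929127:-1485108316",
            when: new Date(), why: "doing balance round"
        }},
        upsert: true, "new": true,
        writeConcern: {w: "majority", wtimeout: 15000},
        maxTimeMS: 30000
    }));

If another process already holds the lock (state != 0), the query matches nothing and the upsert then collides with the existing _id, so the caller gets a duplicate-key error instead of silently stealing the lock.
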
[js_test:multi_coll_drop] 2016-04-06T02:52:20.056-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.057-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.060-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.064-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.065-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.066-0500 c20013| 2016-04-06T02:52:07.393-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:20.066-0500 c20012| 2016-04-06T02:52:07.393-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:20.068-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.070-0500 c20012| 2016-04-06T02:52:07.393-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "balancer" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.072-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.075-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.077-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.078-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.080-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.082-0500 c20013| 2016-04-06T02:52:07.393-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "balancer" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.082-0500 c20013| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.084-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.085-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.086-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.086-0500 c20012| 2016-04-06T02:52:07.393-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.087-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 2] shutting down thread in 
pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.089-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.089-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.090-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.091-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.091-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.092-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.094-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.094-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.095-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.097-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.097-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.098-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.098-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.100-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.101-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.103-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.107-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.108-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.111-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.113-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer 
worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.115-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.115-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.116-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.117-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.117-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.119-0500 c20013| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.123-0500 c20013| 2016-04-06T02:52:07.394-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 234 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.394-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.124-0500 c20012| 2016-04-06T02:52:07.394-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.130-0500 c20012| 2016-04-06T02:52:07.394-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 232 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.394-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.132-0500 c20012| 2016-04-06T02:52:07.394-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 232 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.134-0500 c20011| 2016-04-06T02:52:07.394-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.136-0500 c20013| 2016-04-06T02:52:07.394-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 234 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.141-0500 c20013| 2016-04-06T02:52:07.394-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.147-0500 c20013| 2016-04-06T02:52:07.395-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.148-0500 c20011| 2016-04-06T02:52:07.395-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.154-0500 c20013| 2016-04-06T02:52:07.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 235 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.156-0500 c20013| 2016-04-06T02:52:07.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 235 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.157-0500 c20012| 2016-04-06T02:52:07.395-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
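
[editor's note] The replSetUpdatePosition commands in this stretch are the internal progress reports each secondary sends up its sync-source chain: one { durableOpTime, appliedOpTime } pair per member. The same per-member positions can be read from the shell with rs.status(); a minimal sketch, assuming a connection to any member of multidrop-configRS:

    // Sketch: print each member's applied position, as reported upstream above.
    rs.status().members.forEach(function (m) {
        print(m.name + " " + m.stateStr + " applied=" + tojson(m.optime));
    });
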
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.161-0500 c20012| 2016-04-06T02:52:07.395-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.174-0500 c20012| 2016-04-06T02:52:07.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 233 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.176-0500 c20012| 2016-04-06T02:52:07.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 233 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.177-0500 c20013| 2016-04-06T02:52:07.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 235 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.179-0500 c20011| 2016-04-06T02:52:07.395-0500 D REPL [conn19] Required snapshot optime: { ts: Timestamp 1459929127000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|9, t: 1 }, name-id: "67" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.184-0500 c20011| 2016-04-06T02:52:07.395-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.185-0500 c20011| 2016-04-06T02:52:07.395-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.189-0500 c20011| 2016-04-06T02:52:07.395-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.193-0500 c20011| 2016-04-06T02:52:07.395-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|10, t: 1 } and is durable through: { ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.196-0500 c20011| 
2016-04-06T02:52:07.395-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|9, t: 1 }, name-id: "67" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.201-0500 c20011| 2016-04-06T02:52:07.395-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.205-0500 c20011| 2016-04-06T02:52:07.395-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.206-0500 c20011| 2016-04-06T02:52:07.395-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.209-0500 c20011| 2016-04-06T02:52:07.395-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|10, t: 1 } and is durable through: { ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.210-0500 c20011| 2016-04-06T02:52:07.395-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929127000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|9, t: 1 }, name-id: "67" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.215-0500 c20011| 2016-04-06T02:52:07.395-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.221-0500 c20011| 2016-04-06T02:52:07.395-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.224-0500 c20012| 2016-04-06T02:52:07.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 233 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.231-0500 c20012| 2016-04-06T02:52:07.397-0500 D REPL 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.240-0500 c20012| 2016-04-06T02:52:07.397-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 235 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.244-0500 c20012| 2016-04-06T02:52:07.397-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 235 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.251-0500 c20011| 2016-04-06T02:52:07.397-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.256-0500 c20011| 2016-04-06T02:52:07.397-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.257-0500 c20011| 2016-04-06T02:52:07.397-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|10, t: 1 } and is durable through: { ts: Timestamp 1459929127000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.259-0500 c20011| 2016-04-06T02:52:07.397-0500 D REPL [conn17] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.263-0500 c20011| 2016-04-06T02:52:07.397-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.269-0500 c20011| 2016-04-06T02:52:07.397-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, 
appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.276-0500 c20011| 2016-04-06T02:52:07.397-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.285-0500 c20011| 2016-04-06T02:52:07.397-0500 I COMMAND [conn19] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('5704c0275ce0eed80678aa0a'), state: 2, who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c0275ce0eed80678aa0a'), state: 2, who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391), why: "doing balance round" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:564 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.293-0500 c20012| 2016-04-06T02:52:07.397-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 235 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.299-0500 s20015| 2016-04-06T02:52:07.397-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 40 finished with response: { lastErrorObject: { updatedExisting: true, n: 1 }, value: { _id: "balancer", state: 2, ts: ObjectId('5704c0275ce0eed80678aa0a'), who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391), why: "doing balance round" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.300-0500 s20015| 2016-04-06T02:52:07.398-0500 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 5704c0275ce0eed80678aa0a [js_test:multi_coll_drop] 2016-04-06T02:52:20.303-0500 s20015| 2016-04-06T02:52:07.398-0500 D SHARDING [Balancer] *** start balancing round. 
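
[editor's note] The findAndModify on config.locks above is the balancer taking its distributed lock: it matches the lock document only in state 0 (free), flips it to state 2 (held) while recording who/process/why, upserts if absent, returns the new document, and is acknowledged at w: "majority" so the lock survives a config-server failover. An illustrative shell rendering with values copied from the log (not something to run by hand against a live cluster):

    // Sketch: the balancer's lock grab, as a shell findAndModify.
    db.getSiblingDB("config").locks.findAndModify({
        query:  { _id: "balancer", state: 0 },   // only succeeds if the lock is free
        update: { $set: { state: 2,
                          who: "mongovm16:20015:1459929127:-1485108316:Balancer",
                          why: "doing balance round" } },
        upsert: true,
        new:    true,                            // return the post-update document
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
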
waitForDelete: 0, secondaryThrottle: {} [js_test:multi_coll_drop] 2016-04-06T02:52:20.307-0500 s20015| 2016-04-06T02:52:07.398-0500 D ASIO [Balancer] startCommand: RemoteCommand 42 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:37.398-0500 cmd:{ find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.312-0500 s20015| 2016-04-06T02:52:07.398-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 42 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:20.315-0500 c20013| 2016-04-06T02:52:07.397-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 234 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.320-0500 c20012| 2016-04-06T02:52:07.403-0500 D COMMAND [conn9] run command config.$cmd { find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.325-0500 c20012| 2016-04-06T02:52:07.403-0500 D REPL [conn9] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929127000|10, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929127000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.330-0500 c20012| 2016-04-06T02:52:07.403-0500 D REPL [conn9] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999977μs [js_test:multi_coll_drop] 2016-04-06T02:52:20.332-0500 c20011| 2016-04-06T02:52:07.404-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|9, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.335-0500 c20012| 2016-04-06T02:52:07.404-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 232 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.337-0500 c20013| 2016-04-06T02:52:07.404-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.338-0500 c20013| 2016-04-06T02:52:07.404-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:20.342-0500 c20013| 2016-04-06T02:52:07.404-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 238 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.404-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.345-0500 c20012| 2016-04-06T02:52:07.404-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.346-0500 c20012| 2016-04-06T02:52:07.404-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:20.351-0500 c20012| 2016-04-06T02:52:07.404-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 238 -- target:mongovm16:20011 
db:local expDate:2016-04-06T02:52:12.404-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.352-0500 c20012| 2016-04-06T02:52:07.404-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 238 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.354-0500 c20012| 2016-04-06T02:52:07.404-0500 D COMMAND [conn9] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.357-0500 c20012| 2016-04-06T02:52:07.404-0500 D COMMAND [conn9] Using 'committed' snapshot. { find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.361-0500 c20011| 2016-04-06T02:52:07.404-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.363-0500 c20012| 2016-04-06T02:52:07.404-0500 D QUERY [conn9] Collection config.collections does not exist. Using EOF plan: query: {} sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:52:20.364-0500 c20013| 2016-04-06T02:52:07.404-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 238 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.371-0500 c20013| 2016-04-06T02:52:07.404-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.379-0500 c20011| 2016-04-06T02:52:07.404-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.383-0500 c20013| 2016-04-06T02:52:07.404-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 239 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.384-0500 c20013| 2016-04-06T02:52:07.404-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 239 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.392-0500 c20011| 2016-04-06T02:52:07.404-0500 D COMMAND [conn16] run 
command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.393-0500 c20011| 2016-04-06T02:52:07.404-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.399-0500 c20012| 2016-04-06T02:52:07.404-0500 I COMMAND [conn9] command config.collections command: find { find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|10, t: 1 } }, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:375 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.403-0500 c20011| 2016-04-06T02:52:07.404-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.409-0500 c20011| 2016-04-06T02:52:07.404-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|10, t: 1 } and is durable through: { ts: Timestamp 1459929127000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.416-0500 c20011| 2016-04-06T02:52:07.404-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.421-0500 s20015| 2016-04-06T02:52:07.404-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 42 finished with response: { waitedMS: 1, cursor: { id: 0, ns: "config.collections", firstBatch: [] }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.423-0500 c20013| 2016-04-06T02:52:07.404-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 239 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.424-0500 s20015| 2016-04-06T02:52:07.404-0500 D SHARDING [Balancer] no collections to balance [js_test:multi_coll_drop] 2016-04-06T02:52:20.425-0500 s20015| 2016-04-06T02:52:07.404-0500 D SHARDING [Balancer] no need to move any chunk [js_test:multi_coll_drop] 2016-04-06T02:52:20.429-0500 s20015| 2016-04-06T02:52:07.405-0500 D SHARDING [Balancer] *** End of balancing round [js_test:multi_coll_drop] 2016-04-06T02:52:20.435-0500 s20015| 2016-04-06T02:52:07.405-0500 D ASIO [Balancer] startCommand: RemoteCommand 44 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.405-0500 cmd:{ 
findAndModify: "locks", query: { ts: ObjectId('5704c0275ce0eed80678aa0a') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.439-0500 s20015| 2016-04-06T02:52:07.405-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 44 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.443-0500 c20011| 2016-04-06T02:52:07.405-0500 D COMMAND [conn19] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c0275ce0eed80678aa0a') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.447-0500 c20011| 2016-04-06T02:52:07.405-0500 D QUERY [conn19] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.450-0500 c20011| 2016-04-06T02:52:07.405-0500 D QUERY [conn19] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c0275ce0eed80678aa0a') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.458-0500 c20011| 2016-04-06T02:52:07.405-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:489 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.461-0500 c20013| 2016-04-06T02:52:07.405-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 238 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|11, t: 1, h: 439476235941015415, v: 2, op: "u", ns: "config.locks", o2: { _id: "balancer" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.468-0500 c20011| 2016-04-06T02:52:07.405-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:489 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.472-0500 c20012| 2016-04-06T02:52:07.405-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 238 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|11, t: 1, h: 439476235941015415, v: 2, op: "u", ns: "config.locks", o2: { _id: "balancer" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.473-0500 c20012| 2016-04-06T02:52:07.405-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|11 and ending at ts: Timestamp 1459929127000|11 [js_test:multi_coll_drop] 2016-04-06T02:52:20.477-0500 c20012| 2016-04-06T02:52:07.405-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.479-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.482-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.484-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.485-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.486-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.488-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.488-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.490-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.492-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.493-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.494-0500 c20012| 2016-04-06T02:52:07.406-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:20.498-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.500-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.503-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.504-0500 c20012| 2016-04-06T02:52:07.406-0500 D QUERY [repl writer worker 5] Using idhack: { _id: "balancer" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.504-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.505-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.510-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.512-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.516-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:20.516-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.518-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.519-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.521-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.522-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.524-0500 c20012| 2016-04-06T02:52:07.406-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.526-0500 c20012| 2016-04-06T02:52:07.407-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.528-0500 c20012| 2016-04-06T02:52:07.407-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.529-0500 c20012| 2016-04-06T02:52:07.407-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.530-0500 c20012| 2016-04-06T02:52:07.407-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.533-0500 c20011| 2016-04-06T02:52:07.407-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.536-0500 c20012| 2016-04-06T02:52:07.407-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 240 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.407-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.536-0500 c20012| 2016-04-06T02:52:07.407-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 240 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.537-0500 c20012| 2016-04-06T02:52:07.414-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.538-0500 c20012| 2016-04-06T02:52:07.414-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.542-0500 c20013| 2016-04-06T02:52:07.415-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|11 and ending at ts: Timestamp 1459929127000|11 [js_test:multi_coll_drop] 2016-04-06T02:52:20.543-0500 c20012| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.544-0500 c20012| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.548-0500 c20013| 2016-04-06T02:52:07.415-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.549-0500 c20012| 2016-04-06T02:52:07.415-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.552-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.553-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.555-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.557-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.558-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.558-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.559-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.560-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.562-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.563-0500 c20013| 2016-04-06T02:52:07.415-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.564-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.565-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.566-0500 c20013| 2016-04-06T02:52:07.416-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:20.570-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.570-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.571-0500 c20013| 2016-04-06T02:52:07.416-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "balancer" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.572-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.573-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.575-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.576-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.577-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.579-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.581-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.582-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.583-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.586-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.586-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.587-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.587-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.588-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.592-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.594-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.600-0500 c20012| 2016-04-06T02:52:07.416-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.605-0500 c20012| 2016-04-06T02:52:07.416-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 241 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.607-0500 c20012| 2016-04-06T02:52:07.416-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 241 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.611-0500 c20011| 2016-04-06T02:52:07.416-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.612-0500 c20011| 2016-04-06T02:52:07.416-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.612-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.613-0500 c20013| 2016-04-06T02:52:07.416-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.615-0500 c20011| 2016-04-06T02:52:07.416-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|11, t: 1 } and is durable through: { ts: Timestamp 1459929127000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.617-0500 c20011| 2016-04-06T02:52:07.416-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.621-0500 c20011| 2016-04-06T02:52:07.416-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.623-0500 c20012| 2016-04-06T02:52:07.417-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 241 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.625-0500 c20013| 2016-04-06T02:52:07.417-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
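
[editor's note] The bursts of "starting thread in pool repl writer worker Pool" / "shutting down thread ..." around "replication batch size is 1" are not errors: the oplog-apply pool appears to spin its 16 workers up for each batch and retire them once the batch (here a single config.locks unlock, applied by _id via idhack) drains. A hedged sketch for confirming batch application from the shell, assuming this release exposes the repl apply counters under serverStatus metrics:

    // Sketch: count applied batches/ops on a secondary.
    var apply = db.serverStatus().metrics.repl.apply;
    print("batches=" + apply.batches.num + " ops=" + apply.ops);
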
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.630-0500 c20013| 2016-04-06T02:52:07.417-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.636-0500 c20013| 2016-04-06T02:52:07.417-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 242 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.637-0500 c20013| 2016-04-06T02:52:07.417-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 242 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.640-0500 c20013| 2016-04-06T02:52:07.417-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 243 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.417-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.640-0500 c20013| 2016-04-06T02:52:07.417-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 243 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.646-0500 c20011| 2016-04-06T02:52:07.417-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.646-0500 c20011| 2016-04-06T02:52:07.417-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.650-0500 c20011| 2016-04-06T02:52:07.417-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.651-0500 c20011| 2016-04-06T02:52:07.417-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } [js_test:multi_coll_drop] 
2016-04-06T02:52:20.654-0500 c20011| 2016-04-06T02:52:07.417-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|11, t: 1 } and is durable through: { ts: Timestamp 1459929127000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.661-0500 c20011| 2016-04-06T02:52:07.417-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.667-0500 c20013| 2016-04-06T02:52:07.417-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 242 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.670-0500 c20011| 2016-04-06T02:52:07.427-0500 D REPL [conn19] Required snapshot optime: { ts: Timestamp 1459929127000|11, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|10, t: 1 }, name-id: "68" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.676-0500 c20013| 2016-04-06T02:52:07.427-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.681-0500 c20013| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 245 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.683-0500 c20013| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 245 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.687-0500 c20011| 2016-04-06T02:52:07.428-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { 
ts: Timestamp 1459929127000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.687-0500 c20011| 2016-04-06T02:52:07.428-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.690-0500 c20011| 2016-04-06T02:52:07.428-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.693-0500 c20011| 2016-04-06T02:52:07.428-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|11, t: 1 } and is durable through: { ts: Timestamp 1459929127000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.694-0500 c20011| 2016-04-06T02:52:07.428-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.700-0500 c20011| 2016-04-06T02:52:07.428-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.702-0500 c20013| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 245 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.708-0500 c20011| 2016-04-06T02:52:07.428-0500 I COMMAND [conn19] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c0275ce0eed80678aa0a') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:564 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 23ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.711-0500 c20011| 2016-04-06T02:52:07.428-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 20ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.713-0500 c20012| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 240 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.717-0500 s20015| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 44 finished with response: { lastErrorObject: { updatedExisting: true, n: 1 }, value: { _id: "balancer", 
state: 2, ts: ObjectId('5704c0275ce0eed80678aa0a'), who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391), why: "doing balance round" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.720-0500 c20012| 2016-04-06T02:52:07.428-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.722-0500 c20012| 2016-04-06T02:52:07.428-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:20.728-0500 c20012| 2016-04-06T02:52:07.428-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 244 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.428-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.733-0500 c20011| 2016-04-06T02:52:07.428-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|10, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.734-0500 s20015| 2016-04-06T02:52:07.428-0500 I SHARDING [Balancer] distributed lock with ts: '5704c0275ce0eed80678aa0a' unlocked. [js_test:multi_coll_drop] 2016-04-06T02:52:20.736-0500 c20013| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 243 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.740-0500 s20015| 2016-04-06T02:52:07.428-0500 D ASIO [Balancer] startCommand: RemoteCommand 46 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.428-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127428), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.741-0500 c20013| 2016-04-06T02:52:07.428-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.742-0500 c20011| 2016-04-06T02:52:07.428-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.744-0500 s20015| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 46 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.744-0500 c20013| 2016-04-06T02:52:07.428-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:20.746-0500 c20012| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 244 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.750-0500 c20011| 2016-04-06T02:52:07.428-0500 D COMMAND [conn19] run command config.$cmd { update:
"mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127428), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.752-0500 c20013| 2016-04-06T02:52:07.428-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 248 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.428-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.753-0500 c20013| 2016-04-06T02:52:07.428-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 248 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.755-0500 c20011| 2016-04-06T02:52:07.428-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.756-0500 c20011| 2016-04-06T02:52:07.429-0500 D QUERY [conn19] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.762-0500 c20011| 2016-04-06T02:52:07.429-0500 I WRITE [conn19] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127428), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.765-0500 c20011| 2016-04-06T02:52:07.429-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.776-0500 c20012| 2016-04-06T02:52:07.429-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.780-0500 c20012| 2016-04-06T02:52:07.429-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 244 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|12, t: 1, h: 396975935605961421, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929127428), waiting: true } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.790-0500 c20012| 
2016-04-06T02:52:07.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 245 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.791-0500 c20012| 2016-04-06T02:52:07.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 245 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.794-0500 c20012| 2016-04-06T02:52:07.429-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|12 and ending at ts: Timestamp 1459929127000|12 [js_test:multi_coll_drop] 2016-04-06T02:52:20.805-0500 c20011| 2016-04-06T02:52:07.429-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.807-0500 c20011| 2016-04-06T02:52:07.429-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.809-0500 c20011| 2016-04-06T02:52:07.429-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.813-0500 c20011| 2016-04-06T02:52:07.429-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|11, t: 1 } and is durable through: { ts: Timestamp 1459929127000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.816-0500 c20011| 2016-04-06T02:52:07.429-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.822-0500 c20013| 2016-04-06T02:52:07.429-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 248 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|12, t: 1, h: 396975935605961421, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929127428), waiting: true } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.826-0500 c20011| 2016-04-06T02:52:07.429-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.829-0500 c20011| 2016-04-06T02:52:07.429-0500 D REPL [conn19] Required snapshot optime: { ts: Timestamp 1459929127000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|11, t: 1 }, name-id: "69" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.840-0500 c20012| 2016-04-06T02:52:07.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 245 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.841-0500 c20012| 2016-04-06T02:52:07.430-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.844-0500 c20013| 2016-04-06T02:52:07.430-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|12 and ending at ts: Timestamp 1459929127000|12 [js_test:multi_coll_drop] 2016-04-06T02:52:20.845-0500 c20013| 2016-04-06T02:52:07.430-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.846-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.848-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.849-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.849-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.850-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.851-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.853-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.854-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.855-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.855-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.863-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl 
writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.865-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.866-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.868-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.870-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.873-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.874-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.876-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.877-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.877-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.879-0500 c20012| 2016-04-06T02:52:07.430-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:20.881-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.881-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.882-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.883-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.884-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.884-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.887-0500 c20012| 2016-04-06T02:52:07.430-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.888-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.892-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.893-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:20.894-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.896-0500 c20013| 2016-04-06T02:52:07.430-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:20.897-0500 c20013| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.897-0500 c20012| 2016-04-06T02:52:07.430-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.899-0500 c20013| 2016-04-06T02:52:07.430-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.900-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.902-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.904-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.905-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.907-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.907-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.908-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.909-0500 c20012| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.910-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.911-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.912-0500 c20012| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.913-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.914-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.915-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.916-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.917-0500 c20013| 
2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.919-0500 c20012| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.920-0500 c20012| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.920-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.920-0500 c20013| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.920-0500 c20012| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.921-0500 c20012| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.922-0500 c20012| 2016-04-06T02:52:07.431-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 248 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.431-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.922-0500 c20012| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.924-0500 c20012| 2016-04-06T02:52:07.431-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.925-0500 c20012| 2016-04-06T02:52:07.431-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 248 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.928-0500 c20011| 2016-04-06T02:52:07.432-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.929-0500 c20012| 2016-04-06T02:52:07.432-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.930-0500 c20012| 2016-04-06T02:52:07.432-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.930-0500 c20012| 2016-04-06T02:52:07.432-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:20.933-0500 c20013| 2016-04-06T02:52:07.432-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 250 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.432-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.935-0500 c20013| 2016-04-06T02:52:07.432-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 250 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.936-0500 c20013| 
2016-04-06T02:52:07.432-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:20.937-0500 c20011| 2016-04-06T02:52:07.432-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.939-0500 c20013| 2016-04-06T02:52:07.432-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.942-0500 c20013| 2016-04-06T02:52:07.432-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 251 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.943-0500 c20013| 2016-04-06T02:52:07.432-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 251 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.944-0500 c20013| 2016-04-06T02:52:07.433-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 251 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.948-0500 c20013| 2016-04-06T02:52:07.435-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.953-0500 c20013| 2016-04-06T02:52:07.435-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 253 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, 
memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.954-0500 c20013| 2016-04-06T02:52:07.435-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 253 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.957-0500 c20013| 2016-04-06T02:52:07.435-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 253 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.960-0500 c20013| 2016-04-06T02:52:07.435-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 250 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.962-0500 c20013| 2016-04-06T02:52:07.435-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.964-0500 c20013| 2016-04-06T02:52:07.435-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:20.969-0500 c20013| 2016-04-06T02:52:07.435-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 256 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.435-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:20.970-0500 c20013| 2016-04-06T02:52:07.435-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 256 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:20.976-0500 c20011| 2016-04-06T02:52:07.432-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:20.978-0500 c20011| 2016-04-06T02:52:07.432-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:20.979-0500 s20015| 2016-04-06T02:52:07.433-0500 I NETWORK [mongosMain] waiting for connections on port 20015 [js_test:multi_coll_drop] 2016-04-06T02:52:20.982-0500 c20011| 2016-04-06T02:52:07.432-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.986-0500 c20011| 2016-04-06T02:52:07.433-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|12, t: 1 } and is durable through: { ts: Timestamp 1459929127000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:20.991-0500 c20011| 2016-04-06T02:52:07.433-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|11, t: 1 }, name-id: "69" } [js_test:multi_coll_drop] 2016-04-06T02:52:20.995-0500 c20011| 2016-04-06T02:52:07.433-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:20.998-0500 c20011| 2016-04-06T02:52:07.435-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.004-0500 c20011| 2016-04-06T02:52:07.435-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:21.005-0500 c20011| 2016-04-06T02:52:07.435-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.005-0500 c20011| 2016-04-06T02:52:07.435-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|12, t: 1 } and is durable through: { ts: Timestamp 1459929127000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.005-0500 c20011| 2016-04-06T02:52:07.435-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.006-0500 c20011| 2016-04-06T02:52:07.435-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.006-0500 c20011| 2016-04-06T02:52:07.435-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.006-0500 c20011| 2016-04-06T02:52:07.435-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|11, t: 1 } } cursorid:20785203637 
numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.010-0500 c20011| 2016-04-06T02:52:07.435-0500 I COMMAND [conn19] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929127428), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.011-0500 c20011| 2016-04-06T02:52:07.435-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.013-0500 c20012| 2016-04-06T02:52:07.434-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.014-0500 c20012| 2016-04-06T02:52:07.434-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.015-0500 c20012| 2016-04-06T02:52:07.434-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.019-0500 c20011| 2016-04-06T02:52:07.435-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.025-0500 c20011| 2016-04-06T02:52:07.436-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.026-0500 c20011| 2016-04-06T02:52:07.436-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:21.027-0500 s20015| 2016-04-06T02:52:07.435-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 46 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929127000|12, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:21.029-0500 c20012| 2016-04-06T02:52:07.434-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.033-0500 c20012| 2016-04-06T02:52:07.435-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.036-0500 c20012| 2016-04-06T02:52:07.435-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 248 finished with response: { cursor: { nextBatch: 
[], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.048-0500 c20012| 2016-04-06T02:52:07.435-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.050-0500 c20012| 2016-04-06T02:52:07.435-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.056-0500 c20012| 2016-04-06T02:52:07.435-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:21.059-0500 c20012| 2016-04-06T02:52:07.435-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 250 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.435-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.060-0500 c20012| 2016-04-06T02:52:07.435-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 250 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.062-0500 c20012| 2016-04-06T02:52:07.436-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.072-0500 c20012| 2016-04-06T02:52:07.436-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 251 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.073-0500 c20012| 2016-04-06T02:52:07.436-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 251 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.074-0500 c20012| 2016-04-06T02:52:07.437-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 251 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.078-0500 c20012| 2016-04-06T02:52:07.437-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.085-0500 c20012| 2016-04-06T02:52:07.437-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 252 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.087-0500 c20012| 2016-04-06T02:52:07.437-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 252 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.088-0500 c20012| 2016-04-06T02:52:07.438-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 252 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.090-0500 c20011| 2016-04-06T02:52:07.436-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|12, t: 1 } and is durable through: { ts: Timestamp 1459929127000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.093-0500 c20011| 2016-04-06T02:52:07.436-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.099-0500 c20011| 2016-04-06T02:52:07.436-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.101-0500 c20011| 2016-04-06T02:52:07.437-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.102-0500 c20011| 2016-04-06T02:52:07.437-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:21.104-0500 c20011| 2016-04-06T02:52:07.437-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|12, t: 1 } and is durable through: { ts: Timestamp 1459929127000|12, t: 1 } 
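The records above trace the w: "majority" acknowledgement path on the config replica set: the primary (c20011) applies the config.mongos upsert, the secondaries (c20012, c20013) pull the op through tailing getMore cursors on local.oplog.rs and report their applied/durable optimes back with replSetUpdatePosition, and the primary advances _lastCommittedOpTime only once a majority is durable, which releases writers blocked on "Required snapshot optime ... is not yet part of the current 'committed' snapshot". A minimal mongo shell sketch of the same kind of write (values illustrative), assuming a shell connected to the primary at mongovm16:20011:

    // Issue a ping-style upsert with majority write concern, mirroring the
    // update command logged on conn19. The command does not return success
    // until a majority of the config nodes report the op as durable.
    var res = db.getSiblingDB("config").runCommand({
        update: "mongos",
        updates: [{
            q: { _id: "mongovm16:20015" },
            u: { $set: { ping: new Date(), up: 0, waiting: true } },
            multi: false,
            upsert: true
        }],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    // Surfaces a write concern timeout if a majority never acknowledges.
    assert.commandWorked(res);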
[js_test:multi_coll_drop] 2016-04-06T02:52:21.106-0500 c20011| 2016-04-06T02:52:07.437-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.109-0500 c20011| 2016-04-06T02:52:07.438-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.110-0500 s20015| 2016-04-06T02:52:07.519-0500 I NETWORK [mongosMain] connection accepted from 127.0.0.1:53934 #1 (1 connection now open) [js_test:multi_coll_drop] 2016-04-06T02:52:21.113-0500 s20014| 2016-04-06T02:52:07.520-0500 D ASIO [conn1] startCommand: RemoteCommand 55 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.520-0500 cmd:{ update: "settings", updates: [ { q: { _id: "balancer" }, u: { $set: { stopped: true } }, multi: false, upsert: true } ], writeConcern: { w: "majority", timeout: 30.0 }, ordered: true, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.115-0500 s20014| 2016-04-06T02:52:07.521-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 55 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.117-0500 c20011| 2016-04-06T02:52:07.521-0500 D COMMAND [conn10] run command config.$cmd { update: "settings", updates: [ { q: { _id: "balancer" }, u: { $set: { stopped: true } }, multi: false, upsert: true } ], writeConcern: { w: "majority", timeout: 30.0 }, ordered: true, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.118-0500 c20011| 2016-04-06T02:52:07.521-0500 D QUERY [conn10] Using idhack: { _id: "balancer" } [js_test:multi_coll_drop] 2016-04-06T02:52:21.123-0500 c20011| 2016-04-06T02:52:07.521-0500 I WRITE [conn10] update config.settings query: { _id: "balancer" } update: { $set: { stopped: true } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.126-0500 c20011| 2016-04-06T02:52:07.521-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:471 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 85ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.131-0500 c20011| 2016-04-06T02:52:07.521-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } cursorid:20785203637 
numYields:1 nreturned:1 reslen:471 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 85ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.133-0500 c20013| 2016-04-06T02:52:07.521-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 256 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|13, t: 1, h: 6220728803746622601, v: 2, op: "i", ns: "config.settings", o: { _id: "balancer", stopped: true } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.135-0500 c20012| 2016-04-06T02:52:07.521-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 250 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|13, t: 1, h: 6220728803746622601, v: 2, op: "i", ns: "config.settings", o: { _id: "balancer", stopped: true } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.141-0500 c20013| 2016-04-06T02:52:07.521-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|13 and ending at ts: Timestamp 1459929127000|13 [js_test:multi_coll_drop] 2016-04-06T02:52:21.143-0500 c20012| 2016-04-06T02:52:07.522-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|13 and ending at ts: Timestamp 1459929127000|13 [js_test:multi_coll_drop] 2016-04-06T02:52:21.144-0500 c20013| 2016-04-06T02:52:07.522-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.146-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.147-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.150-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.152-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.153-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.156-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.156-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.157-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.158-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.159-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.161-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 
11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.163-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.165-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.165-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.167-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.167-0500 c20013| 2016-04-06T02:52:07.522-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:21.168-0500 c20013| 2016-04-06T02:52:07.522-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.171-0500 c20012| 2016-04-06T02:52:07.522-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.177-0500 c20011| 2016-04-06T02:52:07.523-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929127000|13, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|12, t: 1 }, name-id: "70" } [js_test:multi_coll_drop] 2016-04-06T02:52:21.178-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.179-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.180-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.181-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.184-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.185-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.186-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.188-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.192-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.193-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.194-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:21.195-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.195-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.196-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.202-0500 c20012| 2016-04-06T02:52:07.523-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:21.203-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.210-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.212-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.213-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.214-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.216-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.217-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.218-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.218-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.219-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.223-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.223-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.225-0500 c20013| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.235-0500 c20013| 2016-04-06T02:52:07.524-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 258 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.524-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.236-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:21.238-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.238-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.240-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.246-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.255-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.256-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.262-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.264-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.275-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.277-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.279-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.280-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.281-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.287-0500 c20012| 2016-04-06T02:52:07.524-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 256 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.524-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.288-0500 c20012| 2016-04-06T02:52:07.524-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 256 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.291-0500 c20013| 2016-04-06T02:52:07.524-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 258 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.294-0500 c20011| 2016-04-06T02:52:07.524-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.296-0500 c20011| 2016-04-06T02:52:07.524-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.296-0500 c20012| 2016-04-06T02:52:07.523-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.297-0500 c20012| 2016-04-06T02:52:07.524-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.297-0500 c20013| 2016-04-06T02:52:07.524-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.298-0500 c20013| 2016-04-06T02:52:07.524-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.301-0500 c20013| 2016-04-06T02:52:07.524-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.302-0500 c20013| 2016-04-06T02:52:07.524-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.303-0500 c20013| 2016-04-06T02:52:07.524-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.304-0500 c20012| 2016-04-06T02:52:07.525-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.309-0500 c20011| 2016-04-06T02:52:07.525-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.310-0500 c20011| 2016-04-06T02:52:07.525-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:21.313-0500 c20011| 2016-04-06T02:52:07.525-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.315-0500 c20011| 2016-04-06T02:52:07.525-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|13, t: 1 } and is durable through: { ts: Timestamp 1459929127000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.322-0500 c20011| 2016-04-06T02:52:07.525-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929127000|13, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|12, t: 1 }, name-id: "70" } [js_test:multi_coll_drop] 2016-04-06T02:52:21.328-0500 c20011| 2016-04-06T02:52:07.525-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.332-0500 c20013| 2016-04-06T02:52:07.525-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.337-0500 c20013| 2016-04-06T02:52:07.525-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 259 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.338-0500 c20013| 2016-04-06T02:52:07.525-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 259 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.341-0500 c20013| 2016-04-06T02:52:07.525-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 259 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.342-0500 c20012| 2016-04-06T02:52:07.525-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.348-0500 c20012| 2016-04-06T02:52:07.525-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.356-0500 c20012| 2016-04-06T02:52:07.525-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 257 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.361-0500 c20012| 2016-04-06T02:52:07.525-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 257 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.365-0500 c20011| 2016-04-06T02:52:07.525-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.367-0500 c20011| 2016-04-06T02:52:07.525-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:21.375-0500 c20011| 2016-04-06T02:52:07.525-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|13, t: 1 } and is durable through: { ts: Timestamp 1459929127000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.387-0500 c20011| 2016-04-06T02:52:07.525-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|13, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|12, t: 1 }, name-id: "70" } [js_test:multi_coll_drop] 2016-04-06T02:52:21.394-0500 c20011| 2016-04-06T02:52:07.525-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.399-0500 c20011| 2016-04-06T02:52:07.525-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: 
{ ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.399-0500 c20012| 2016-04-06T02:52:07.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 257 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.406-0500 c20011| 2016-04-06T02:52:07.526-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.407-0500 c20011| 2016-04-06T02:52:07.526-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:21.409-0500 c20011| 2016-04-06T02:52:07.526-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.426-0500 c20011| 2016-04-06T02:52:07.526-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|13, t: 1 } and is durable through: { ts: Timestamp 1459929127000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.428-0500 c20011| 2016-04-06T02:52:07.526-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.431-0500 c20011| 2016-04-06T02:52:07.526-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.436-0500 c20013| 2016-04-06T02:52:07.526-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
2016-04-06T02:52:21.450-0500 c20013| 2016-04-06T02:52:07.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 261 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.452-0500 c20013| 2016-04-06T02:52:07.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 261 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.455-0500 c20013| 2016-04-06T02:52:07.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 261 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.461-0500 c20011| 2016-04-06T02:52:07.526-0500 I COMMAND [conn10] command config.$cmd command: update { update: "settings", updates: [ { q: { _id: "balancer" }, u: { $set: { stopped: true } }, multi: false, upsert: true } ], writeConcern: { w: "majority", timeout: 30.0 }, ordered: true, maxTimeMS: 30000 } numYields:0 reslen:438 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.464-0500 c20011| 2016-04-06T02:52:07.526-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.466-0500 c20013| 2016-04-06T02:52:07.526-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 258 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.469-0500 c20011| 2016-04-06T02:52:07.526-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|12, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.471-0500 c20012| 2016-04-06T02:52:07.526-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 256 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.472-0500 c20012| 2016-04-06T02:52:07.527-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.472-0500 c20012| 2016-04-06T02:52:07.527-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:21.473-0500 s20014| 
2016-04-06T02:52:07.526-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 55 finished with response: { ok: 1, nModified: 0, n: 1, upserted: [ { index: 0, _id: "balancer" } ], opTime: { ts: Timestamp 1459929127000|13, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:21.476-0500 c20012| 2016-04-06T02:52:07.527-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 260 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.527-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.478-0500 c20011| 2016-04-06T02:52:07.527-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.503-0500 c20011| 2016-04-06T02:52:07.527-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.503-0500 c20011| 2016-04-06T02:52:07.527-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:21.506-0500 c20011| 2016-04-06T02:52:07.527-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|13, t: 1 } and is durable through: { ts: Timestamp 1459929127000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.513-0500 c20011| 2016-04-06T02:52:07.527-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.519-0500 c20011| 2016-04-06T02:52:07.527-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.520-0500 c20012| 2016-04-06T02:52:07.527-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 260 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.527-0500 c20012| 2016-04-06T02:52:07.527-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, 
cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.535-0500 c20012| 2016-04-06T02:52:07.527-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 261 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.538-0500 c20012| 2016-04-06T02:52:07.527-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 261 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.540-0500 c20012| 2016-04-06T02:52:07.527-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 261 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.543-0500 s20014| 2016-04-06T02:52:07.527-0500 D ASIO [conn1] startCommand: RemoteCommand 57 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.527-0500 cmd:{ find: "collections", filter: { _id: /^config\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.546-0500 c20011| 2016-04-06T02:52:07.527-0500 D COMMAND [conn10] run command config.$cmd { find: "collections", filter: { _id: /^config\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.549-0500 c20011| 2016-04-06T02:52:07.527-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.552-0500 c20011| 2016-04-06T02:52:07.527-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "collections", filter: { _id: /^config\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.552-0500 c20011| 2016-04-06T02:52:07.527-0500 D QUERY [conn10] Collection config.collections does not exist. Using EOF plan: query: { _id: /^config\./ } sort: {} projection: {}
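The exchange above is a causally consistent config read: mongos attaches readConcern { level: "majority", afterOpTime: ... } carrying the opTime of its preceding majority write (ts ...|13), and conn10 parks the find until the config primary's 'committed' snapshot reaches that opTime. The snapshot can only advance once a majority of the three config nodes report durability at |13 through replSetUpdatePosition, which is why the primary logs "Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|13, t: 1 }" just before the read proceeds. A minimal shell sketch of the same read-after-write pattern; `lastOpTime` is an assumed variable holding the opTime returned by the earlier write, not a name taken from this log:

    // Hedged sketch: majority read that waits for a prior write's opTime.
    // Assumes `lastOpTime` came back in the response to a w:"majority" write.
    var res = db.getSiblingDB("config").runCommand({
        find: "collections",
        filter: { _id: /^config\./ },
        readConcern: { level: "majority", afterOpTime: lastOpTime },
        maxTimeMS: 30000
    });
    assert.commandWorked(res);
    // The server blocks inside the command (the "Waiting for 'committed'
    // snapshot" line above) instead of returning possibly stale data.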
[js_test:multi_coll_drop] 2016-04-06T02:52:21.556-0500 s20014| 2016-04-06T02:52:07.527-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 57 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.584-0500 c20011| 2016-04-06T02:52:07.528-0500 I COMMAND [conn10] command config.collections command: find { find: "collections", filter: { _id: /^config\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } }, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:395 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:21.588-0500 s20014| 2016-04-06T02:52:07.528-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 57 finished with response: { waitedMS: 0, cursor: { id: 0, ns: "config.collections", firstBatch: [] }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.590-0500 s20014| 2016-04-06T02:52:07.528-0500 D SHARDING [conn1] found 0 collections left and 0 collections dropped for database config
[js_test:multi_coll_drop] 2016-04-06T02:52:21.590-0500 s20014| 2016-04-06T02:52:07.528-0500 D ASIO [conn1] startCommand: RemoteCommand 59 -- target:mongovm16:20011 db:config cmd:{ find: "mongos" }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.594-0500 s20014| 2016-04-06T02:52:07.528-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Connecting to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.596-0500 s20014| 2016-04-06T02:52:07.530-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Starting asynchronous command 60 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.600-0500 c20011| 2016-04-06T02:52:07.530-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58990 #21 (17 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:21.602-0500 c20011| 2016-04-06T02:52:07.530-0500 D COMMAND [conn21] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.606-0500 c20011| 2016-04-06T02:52:07.531-0500 I COMMAND [conn21] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:21.608-0500 s20014| 2016-04-06T02:52:07.531-0500 I ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.609-0500 s20014| 2016-04-06T02:52:07.531-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Request 60 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:21.611-0500 s20014| 2016-04-06T02:52:07.531-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Starting asynchronous command 59 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.613-0500 c20011| 2016-04-06T02:52:07.531-0500 D COMMAND [conn21] run command config.$cmd { find: "mongos" }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.614-0500 c20011| 2016-04-06T02:52:07.531-0500 D QUERY [conn21] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.617-0500 c20011| 2016-04-06T02:52:07.531-0500 I COMMAND [conn21] command config.mongos command: find { find: "mongos" } planSummary: COLLSCAN keysExamined:0 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:374 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.621-0500 s20014| 2016-04-06T02:52:07.531-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Request 59 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20014", ping: new Date(1459929127192), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" }, { _id: "mongovm16:20015", ping: new Date(1459929127428), up: 0, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } ], id: 0, ns: "config.mongos" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.621-0500 Waiting for active hosts... [js_test:multi_coll_drop] 2016-04-06T02:52:21.621-0500 Waiting for the balancer lock... [js_test:multi_coll_drop] 2016-04-06T02:52:21.624-0500 s20014| 2016-04-06T02:52:07.532-0500 D ASIO [conn1] startCommand: RemoteCommand 62 -- target:mongovm16:20011 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:multi_coll_drop] 2016-04-06T02:52:21.626-0500 s20014| 2016-04-06T02:52:07.532-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-1-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.631-0500 s20014| 2016-04-06T02:52:07.532-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-1-0] Starting asynchronous command 63 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.638-0500 c20011| 2016-04-06T02:52:07.532-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58991 #22 (18 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:21.639-0500 c20013| 2016-04-06T02:52:07.533-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.640-0500 c20013| 2016-04-06T02:52:07.533-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:21.643-0500 c20013| 2016-04-06T02:52:07.533-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 264 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.533-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.644-0500 c20011| 2016-04-06T02:52:07.533-0500 D COMMAND [conn22] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:52:21.644-0500 c20013| 2016-04-06T02:52:07.533-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 264 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.646-0500 c20011| 2016-04-06T02:52:07.533-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.648-0500 c20011| 2016-04-06T02:52:07.533-0500 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 
locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:21.651-0500 s20014| 2016-04-06T02:52:07.533-0500 I ASIO [NetworkInterfaceASIO-TaskExecutorPool-1-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.657-0500 s20014| 2016-04-06T02:52:07.533-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-1-0] Request 63 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:21.662-0500 s20014| 2016-04-06T02:52:07.533-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-1-0] Starting asynchronous command 62 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.665-0500 c20011| 2016-04-06T02:52:07.533-0500 D COMMAND [conn22] run command config.$cmd { find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.670-0500 c20011| 2016-04-06T02:52:07.533-0500 D QUERY [conn22] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1
[js_test:multi_coll_drop] 2016-04-06T02:52:21.685-0500 c20011| 2016-04-06T02:52:07.533-0500 I COMMAND [conn22] command config.locks command: find { find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:368 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:21.693-0500 s20014| 2016-04-06T02:52:07.533-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-1-0] Request 62 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", state: 0, ts: ObjectId('5704c0275ce0eed80678aa0a'), who: "mongovm16:20015:1459929127:-1485108316:Balancer", process: "mongovm16:20015:1459929127:-1485108316", when: new Date(1459929127391), why: "doing balance round" } ], id: 0, ns: "config.locks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.695-0500 Waiting again for active hosts after balancer is off...
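Before touching shards, the harness turns the balancer off and checks that nobody holds the balancer distributed lock: the earlier conn10 update upserted { _id: "balancer" } in config.settings with { $set: { stopped: true } } at w:"majority", and the find above returns the lock document with state: 0 (unlocked), even though mongovm16:20015 had taken it for a balance round moments earlier. A sketch of the equivalent shell steps; the polling helper is an assumption, not code taken from this test:

    // Hedged sketch: stop the balancer, then wait for its lock to be free.
    var conf = db.getSiblingDB("config");
    assert.writeOK(conf.settings.update(
        { _id: "balancer" },
        { $set: { stopped: true } },
        { upsert: true, writeConcern: { w: "majority", wtimeout: 30000 } }));
    assert.soon(function() {
        var lock = conf.locks.findOne({ _id: "balancer" });
        return lock == null || lock.state == 0;   // state 0 means unlocked
    }, "balancer lock was never released");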
[js_test:multi_coll_drop] 2016-04-06T02:52:21.703-0500 ShardingTest multidrop going to add shard : mongovm16:20010
[js_test:multi_coll_drop] 2016-04-06T02:52:21.707-0500 s20014| 2016-04-06T02:52:07.534-0500 D ASIO [conn1] startCommand: RemoteCommand 65 -- target:mongovm16:20010 db:admin expDate:2016-04-06T02:52:37.534-0500 cmd:{ isdbgrid: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.708-0500 s20014| 2016-04-06T02:52:07.534-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Connecting to mongovm16:20010
[js_test:multi_coll_drop] 2016-04-06T02:52:21.714-0500 s20014| 2016-04-06T02:52:07.534-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Starting asynchronous command 66 on host mongovm16:20010
[js_test:multi_coll_drop] 2016-04-06T02:52:21.722-0500 d20010| 2016-04-06T02:52:07.534-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:58975 #2 (2 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:21.728-0500 s20014| 2016-04-06T02:52:07.535-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Successfully connected to mongovm16:20010
[js_test:multi_coll_drop] 2016-04-06T02:52:21.729-0500 s20014| 2016-04-06T02:52:07.535-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Request 66 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:21.729-0500 s20014| 2016-04-06T02:52:07.535-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Starting asynchronous command 65 on host mongovm16:20010
[js_test:multi_coll_drop] 2016-04-06T02:52:21.731-0500 s20014| 2016-04-06T02:52:07.535-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Request 65 finished with response: { ok: 0.0, errmsg: "no such command: 'isdbgrid', bad cmd: '{ isdbgrid: 1 }'", code: 59 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.732-0500 s20014| 2016-04-06T02:52:07.535-0500 D ASIO [conn1] startCommand: RemoteCommand 68 -- target:mongovm16:20010 db:admin expDate:2016-04-06T02:52:37.535-0500 cmd:{ isMaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.735-0500 s20014| 2016-04-06T02:52:07.535-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Starting asynchronous command 68 on host mongovm16:20010
[js_test:multi_coll_drop] 2016-04-06T02:52:21.736-0500 s20014| 2016-04-06T02:52:07.535-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Request 68 finished with response: { ismaster: true, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1459929127535), maxWireVersion: 4, minWireVersion: 0, readOnly: false, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.738-0500 s20014| 2016-04-06T02:52:07.536-0500 D ASIO [conn1] startCommand: RemoteCommand 70 -- target:mongovm16:20010 db:admin expDate:2016-04-06T02:52:37.536-0500 cmd:{ replSetGetStatus: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.740-0500 s20014| 2016-04-06T02:52:07.536-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Starting asynchronous command 70 on host mongovm16:20010
[js_test:multi_coll_drop] 2016-04-06T02:52:21.741-0500 s20014| 2016-04-06T02:52:07.536-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Request 70 finished with response: { ok: 0.0, errmsg: "not running with --replSet", code: 76 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.744-0500 s20014| 2016-04-06T02:52:07.536-0500 D ASIO [conn1] startCommand: RemoteCommand 72 -- target:mongovm16:20010 db:admin expDate:2016-04-06T02:52:37.536-0500 cmd:{ listDatabases: 1 }
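The addShard preflight is visible in Requests 65 through 72: { isdbgrid: 1 } must fail with code 59 ("no such command"), which proves the target is a mongod rather than another mongos; { replSetGetStatus: 1 } failing with code 76 ("not running with --replSet") classifies it as a standalone instead of a replica-set shard; isMaster and listDatabases then confirm reachability and let mongos check for database-name collisions before it writes the new config.shards document. A sketch of the same probes from the shell; the `conn` connection object is an assumption:

    // Hedged sketch of the probes mongos runs before adding a shard.
    var conn = new Mongo("mongovm16:20010");   // assumed direct connection
    var res = conn.adminCommand({ isdbgrid: 1 });
    assert.eq(59, res.code);                   // would succeed on a mongos
    res = conn.adminCommand({ replSetGetStatus: 1 });
    assert.eq(76, res.code);                   // standalone, no --replSet
    assert.commandWorked(conn.adminCommand({ isMaster: 1 }));
    assert.commandWorked(conn.adminCommand({ listDatabases: 1 }));
    // Only after all probes pass does mongos insert
    // { _id: "shard0000", host: "mongovm16:20010" } into config.shards.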
[js_test:multi_coll_drop] 2016-04-06T02:52:21.747-0500 s20014| 2016-04-06T02:52:07.536-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Starting asynchronous command 72 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:21.748-0500 s20014| 2016-04-06T02:52:07.537-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-TaskExecutor-0] Request 72 finished with response: { databases: [ { name: "local", sizeOnDisk: 8192.0, empty: false } ], totalSize: 8192.0, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.752-0500 s20014| 2016-04-06T02:52:07.537-0500 D ASIO [conn1] startCommand: RemoteCommand 74 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:37.537-0500 cmd:{ find: "shards", filter: { _id: /^shard/ }, sort: { _id: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.753-0500 s20014| 2016-04-06T02:52:07.537-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 74 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:21.755-0500 c20013| 2016-04-06T02:52:07.537-0500 D COMMAND [conn10] run command config.$cmd { find: "shards", filter: { _id: /^shard/ }, sort: { _id: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.757-0500 c20013| 2016-04-06T02:52:07.537-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.761-0500 c20013| 2016-04-06T02:52:07.537-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "shards", filter: { _id: /^shard/ }, sort: { _id: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.767-0500 c20013| 2016-04-06T02:52:07.537-0500 D QUERY [conn10] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.shards" } [js_test:multi_coll_drop] 2016-04-06T02:52:21.770-0500 c20013| 2016-04-06T02:52:07.537-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. 
query: { _id: /^shard/ } sort: { _id: -1 } projection: {} limit: 1, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.775-0500 c20013| 2016-04-06T02:52:07.537-0500 I COMMAND [conn10] command config.shards command: find { find: "shards", filter: { _id: /^shard/ }, sort: { _id: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|13, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { _id: 1 } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:370 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.778-0500 s20014| 2016-04-06T02:52:07.537-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 74 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.779-0500 s20014| 2016-04-06T02:52:07.537-0500 I SHARDING [conn1] going to add shard: { _id: "shard0000", host: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:52:21.784-0500 s20014| 2016-04-06T02:52:07.538-0500 D ASIO [conn1] startCommand: RemoteCommand 76 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.538-0500 cmd:{ insert: "shards", documents: [ { _id: "shard0000", host: "mongovm16:20010" } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.784-0500 s20014| 2016-04-06T02:52:07.538-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 76 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.785-0500 c20011| 2016-04-06T02:52:07.538-0500 D COMMAND [conn10] run command config.$cmd { insert: "shards", documents: [ { _id: "shard0000", host: "mongovm16:20010" } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.789-0500 c20013| 2016-04-06T02:52:07.538-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 264 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|14, t: 1, h: 5220479729283035390, v: 2, op: "i", ns: "config.shards", o: { _id: "shard0000", host: "mongovm16:20010" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.793-0500 c20011| 2016-04-06T02:52:07.538-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:486 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.798-0500 c20011| 2016-04-06T02:52:07.538-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:486 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:21.802-0500 c20013| 2016-04-06T02:52:07.538-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 
1459929127000|14 and ending at ts: Timestamp 1459929127000|14
[js_test:multi_coll_drop] 2016-04-06T02:52:21.804-0500 c20013| 2016-04-06T02:52:07.539-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:21.805-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:21.806-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:21.807-0500
[js_test:multi_coll_drop] 2016-04-06T02:52:21.808-0500
[js_test:multi_coll_drop] 2016-04-06T02:52:21.809-0500 ----
[js_test:multi_coll_drop] 2016-04-06T02:52:21.810-0500 Shard and split collection...
[js_test:multi_coll_drop] 2016-04-06T02:52:21.810-0500 ----
[js_test:multi_coll_drop] 2016-04-06T02:52:21.810-0500
[js_test:multi_coll_drop] 2016-04-06T02:52:21.811-0500
[js_test:multi_coll_drop] 2016-04-06T02:52:21.811-0500 *** Continuous stepdown thread running with seed node mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:21.814-0500 c20012| 2016-04-06T02:52:07.538-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 260 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|14, t: 1, h: 5220479729283035390, v: 2, op: "i", ns: "config.shards", o: { _id: "shard0000", host: "mongovm16:20010" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:21.819-0500 c20012| 2016-04-06T02:52:07.539-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|14 and ending at ts: Timestamp 1459929127000|14
[js_test:multi_coll_drop] 2016-04-06T02:52:21.821-0500 c20012| 2016-04-06T02:52:07.539-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.822-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.822-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.824-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.824-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.829-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.829-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.830-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.831-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.833-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.834-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.836-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.837-0500 c20012| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.839-0500 c20012| 2016-04-06T02:52:07.540-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:21.840-0500 c20012| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.844-0500 c20011| 2016-04-06T02:52:07.541-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.845-0500 s20014| 2016-04-06T02:52:07.544-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 76 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929127000|14, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:21.848-0500 c20011| 2016-04-06T02:52:07.541-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: 
{ ts: Timestamp 1459929127000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.849-0500 c20011| 2016-04-06T02:52:07.541-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:21.850-0500 c20012| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.851-0500 c20012| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.854-0500 c20012| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.856-0500 c20012| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.860-0500 c20012| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.861-0500 c20012| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.862-0500 c20012| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.864-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.869-0500 c20012| 2016-04-06T02:52:07.541-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 264 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.541-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.870-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.870-0500 c20012| 2016-04-06T02:52:07.541-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 264 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.872-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.872-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.873-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.874-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.875-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.876-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.876-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR 
[repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.881-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.886-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.886-0500 c20012| 2016-04-06T02:52:07.541-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.894-0500 c20012| 2016-04-06T02:52:07.542-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.899-0500 c20012| 2016-04-06T02:52:07.542-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.908-0500 c20012| 2016-04-06T02:52:07.542-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 265 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.910-0500 c20012| 2016-04-06T02:52:07.542-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 265 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.912-0500 c20012| 2016-04-06T02:52:07.542-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 265 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.914-0500 c20012| 2016-04-06T02:52:07.544-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 264 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.915-0500 c20012| 2016-04-06T02:52:07.544-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.918-0500 c20012| 2016-04-06T02:52:07.544-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:21.927-0500 c20012| 2016-04-06T02:52:07.544-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 268 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.544-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: 
Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:21.933-0500 c20012| 2016-04-06T02:52:07.544-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 268 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.951-0500 c20012| 2016-04-06T02:52:07.544-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.959-0500 c20012| 2016-04-06T02:52:07.544-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 269 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:21.960-0500 c20012| 2016-04-06T02:52:07.544-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 269 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:21.961-0500 c20012| 2016-04-06T02:52:07.545-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 269 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.969-0500 c20012| 2016-04-06T02:52:07.566-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 268 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|15, t: 1, h: 8652652745203495356, v: 2, op: "c", ns: "config.$cmd", o: { create: "changelog", capped: true, size: 10485760 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.971-0500 c20012| 2016-04-06T02:52:07.566-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|15 and ending at ts: Timestamp 1459929127000|15
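
The RemoteCommand 264/268 traffic above is the background-sync tailing loop: the secondary holds a cursor on the primary's local.oplog.rs and keeps re-issuing getMore with maxTimeMS: 2500, its current term, and lastKnownCommittedOpTime, so the primary can piggyback commit-point advances even on empty batches ("fetcher read 0 operations"); here a batch finally delivers the replicated create of config.changelog. A rough shell equivalent of that tail, assuming a replica-set member such as the mongovm16:20011 node in this log:

    // sketch: tail the oplog roughly the way bgsync does (legacy-shell cursor options)
    var oplog = db.getSiblingDB("local").oplog.rs;
    var cur = oplog.find({ ts: { $gte: Timestamp(1459929127, 14) } })
                   .addOption(DBQuery.Option.oplogReplay)
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) printjson(cur.next());   // blocks briefly, like maxTimeMS above
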
[js_test:multi_coll_drop] 2016-04-06T02:52:21.973-0500 c20012| 2016-04-06T02:52:07.566-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:21.977-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.978-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.979-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.980-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.981-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.983-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.986-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.986-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.989-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.989-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.989-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.990-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.991-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.992-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.993-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.995-0500 c20012| 2016-04-06T02:52:07.567-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:21.996-0500 c20012| 2016-04-06T02:52:07.567-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:21.997-0500 c20012| 2016-04-06T02:52:07.567-0500 D STORAGE [repl writer worker 2] create collection config.changelog { capped: true, size: 10485760 } [js_test:multi_coll_drop] 2016-04-06T02:52:21.997-0500 c20012| 2016-04-06T02:52:07.567-0500 D STORAGE [repl writer worker 2] stored meta data for config.changelog @ RecordId(15)
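
The nextBatch document fetched above is now being applied: collection creation replicates as a command op (op: "c" on config.$cmd), so repl writer worker 2 runs the same create locally that the primary ran, giving every member an identical 10 MB capped config.changelog. The replicated command corresponds to this shell call (options taken from the log entry):

    // sketch: the create op the secondary is applying above
    db.getSiblingDB("config").createCollection("changelog",
                                               { capped: true, size: 10485760 });
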
[js_test:multi_coll_drop] 2016-04-06T02:52:22.000-0500 c20012| 2016-04-06T02:52:07.567-0500 D STORAGE [repl writer worker 2] WiredTigerKVEngine::createRecordStore uri: table:collection-35-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:22.023-0500 c20012| 2016-04-06T02:52:07.568-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 272 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.568-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.023-0500 c20012| 2016-04-06T02:52:07.569-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 272 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.025-0500 c20012| 2016-04-06T02:52:07.569-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 272 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|16, t: 1, h: 2661855704509793992, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:07.566-0500-5704c02706c33406d4d9c0bc", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929127566), what: "addShard", ns: "", details: { name: "shard0000", host: "mongovm16:20010" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.027-0500 c20012| 2016-04-06T02:52:07.569-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|16 and ending at ts: Timestamp 1459929127000|16 [js_test:multi_coll_drop] 2016-04-06T02:52:22.028-0500 c20012| 2016-04-06T02:52:07.569-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:52:22.031-0500 c20012| 2016-04-06T02:52:07.571-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 274 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.571-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.032-0500 c20012| 2016-04-06T02:52:07.572-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 274 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.034-0500 c20012| 2016-04-06T02:52:07.574-0500 D STORAGE [repl writer worker 2] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-35-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:22.034-0500 c20012| 2016-04-06T02:52:07.574-0500 D STORAGE [repl writer worker 2] config.changelog: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:22.036-0500 c20012| 2016-04-06T02:52:07.574-0500 D STORAGE [repl writer worker 2] WiredTigerKVEngine::createSortedDataInterface ident: index-36-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.changelog" }), [js_test:multi_coll_drop] 2016-04-06T02:52:22.043-0500 c20012| 2016-04-06T02:52:07.575-0500 D STORAGE [repl writer worker 2] create uri: table:index-36-6577373056560964212 config: 
type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.changelog" }), [js_test:multi_coll_drop] 2016-04-06T02:52:22.046-0500 c20012| 2016-04-06T02:52:07.603-0500 D STORAGE [repl writer worker 2] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-36-6577373056560964212 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:22.050-0500 c20012| 2016-04-06T02:52:07.603-0500 D STORAGE [repl writer worker 2] config.changelog: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:22.051-0500 c20012| 2016-04-06T02:52:07.603-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.052-0500 c20012| 2016-04-06T02:52:07.603-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.055-0500 c20012| 2016-04-06T02:52:07.603-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.057-0500 c20012| 2016-04-06T02:52:07.603-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.060-0500 c20012| 2016-04-06T02:52:07.603-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.063-0500 c20012| 2016-04-06T02:52:07.603-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.066-0500 c20012| 2016-04-06T02:52:07.603-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.067-0500 c20012| 2016-04-06T02:52:07.604-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.068-0500 c20012| 2016-04-06T02:52:07.604-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.070-0500 c20012| 2016-04-06T02:52:07.604-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.078-0500 c20012| 2016-04-06T02:52:07.604-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.081-0500 c20012| 2016-04-06T02:52:07.604-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.082-0500 c20012| 2016-04-06T02:52:07.604-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.083-0500 c20012| 2016-04-06T02:52:07.604-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.085-0500 c20012| 2016-04-06T02:52:07.604-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
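
The createRecordStore / createSortedDataInterface lines above are the WiredTiger half of that create: one table for the collection data (key_format=q, snappy block compression) and one for its _id index (formatVersion 6 app metadata), each then verified by checkApplicationMetadataFormatVersion. Those creation strings remain inspectable later through collection stats, roughly:

    // sketch: inspect the WiredTiger table configs recorded at creation time
    var s = db.getSiblingDB("config").changelog.stats({ indexDetails: true });
    print(s.wiredTiger.creationString);   // the collection table config above
    printjson(s.indexDetails);            // per-index WT config, where reported
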
[js_test:multi_coll_drop] 2016-04-06T02:52:22.086-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.088-0500 c20012| 2016-04-06T02:52:07.605-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:22.103-0500 c20012| 2016-04-06T02:52:07.605-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:22.107-0500 c20012| 2016-04-06T02:52:07.605-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.113-0500 c20012| 2016-04-06T02:52:07.605-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 275 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.115-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.118-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.120-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.120-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.122-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.124-0500 c20012| 2016-04-06T02:52:07.605-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 275 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.125-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.126-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.126-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.131-0500 c20012| 2016-04-06T02:52:07.605-0500 D EXECUTOR [repl writer 
worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.135-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.135-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.137-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.138-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.139-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.140-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.143-0500 c20012| 2016-04-06T02:52:07.606-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:22.144-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.146-0500 c20012| 2016-04-06T02:52:07.606-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 275 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.147-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.147-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.149-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.149-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.152-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.155-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.156-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.157-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.158-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.160-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.160-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 8] shutting down thread in 
pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.162-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.163-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.163-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.164-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.165-0500 c20012| 2016-04-06T02:52:07.606-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.168-0500 c20012| 2016-04-06T02:52:07.607-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:22.170-0500 c20012| 2016-04-06T02:52:07.612-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.176-0500 c20012| 2016-04-06T02:52:07.612-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 277 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.179-0500 c20012| 2016-04-06T02:52:07.612-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 277 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.181-0500 c20012| 2016-04-06T02:52:07.612-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 277 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.187-0500 c20012| 2016-04-06T02:52:07.614-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.191-0500 c20012| 2016-04-06T02:52:07.614-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 279 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.192-0500 c20012| 2016-04-06T02:52:07.614-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 279 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.198-0500 c20012| 2016-04-06T02:52:07.615-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 279 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.199-0500 c20012| 2016-04-06T02:52:07.615-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 274 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.199-0500 c20012| 2016-04-06T02:52:07.615-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.203-0500 c20012| 2016-04-06T02:52:07.615-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:22.207-0500 c20012| 2016-04-06T02:52:07.615-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 282 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.615-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.208-0500 c20012| 2016-04-06T02:52:07.615-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 282 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.213-0500 c20012| 2016-04-06T02:52:07.620-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.215-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.217-0500 c20011| 2016-04-06T02:52:07.541-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.223-0500 c20011| 2016-04-06T02:52:07.541-0500 D 
REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|14, t: 1 } and is durable through: { ts: Timestamp 1459929127000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.225-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.231-0500 c20011| 2016-04-06T02:52:07.541-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.238-0500 c20012| 2016-04-06T02:52:07.620-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 283 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.241-0500 c20012| 2016-04-06T02:52:07.620-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 283 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.243-0500 c20012| 2016-04-06T02:52:07.620-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 283 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.247-0500 c20012| 2016-04-06T02:52:07.621-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 282 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.251-0500 c20012| 2016-04-06T02:52:07.621-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.256-0500 c20012| 2016-04-06T02:52:07.621-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:22.262-0500 c20012| 2016-04-06T02:52:07.621-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 286 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.621-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.264-0500 c20012| 2016-04-06T02:52:07.621-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 286 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.265-0500 c20012| 2016-04-06T02:52:07.630-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 }
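
The replSetUpdatePosition round trips above are how the commit point advances: each secondary reports per-member durableOpTime/appliedOpTime pairs to its sync source, the primary folds them into a new majority optime, and the "Updating _lastCommittedOpTime" lines show each node adopting it (here |15, then |16). The same optimes are visible from the shell, roughly:

    // sketch: watch per-member optimes and the majority commit point
    var st = db.adminCommand({ replSetGetStatus: 1 });
    st.members.forEach(function (m) {
        print(m.name, m.stateStr, tojson(m.optime));   // applied optime per member
    });
    printjson(st.optimes);   // commit-point fields, on versions that report them
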
[js_test:multi_coll_drop] 2016-04-06T02:52:22.267-0500 c20012| 2016-04-06T02:52:07.633-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:443 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.271-0500 c20012| 2016-04-06T02:52:08.065-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 287 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:18.065-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.273-0500 c20012| 2016-04-06T02:52:08.065-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 287 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.277-0500 c20012| 2016-04-06T02:52:08.072-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 287 finished with response: { ok: 1.0, electionTime: new Date(6270347837762961409), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, opTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.281-0500 c20012| 2016-04-06T02:52:08.074-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:10.074Z [js_test:multi_coll_drop] 2016-04-06T02:52:22.285-0500 c20011| 2016-04-06T02:52:07.542-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929127000|14, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|13, t: 1 }, name-id: "71" } [js_test:multi_coll_drop] 2016-04-06T02:52:22.292-0500 c20011| 2016-04-06T02:52:07.542-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.293-0500 c20011| 2016-04-06T02:52:07.542-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:22.297-0500 c20011| 2016-04-06T02:52:07.542-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|14, t: 1 } and is durable through: { ts: Timestamp 1459929127000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.303-0500 c20011| 2016-04-06T02:52:07.542-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|14, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|13, t: 1 }, name-id: "71" } [js_test:multi_coll_drop] 2016-04-06T02:52:22.313-0500 c20011| 2016-04-06T02:52:07.542-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.339-0500 c20011| 2016-04-06T02:52:07.542-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.340-0500 c20012| 2016-04-06T02:52:08.074-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 289 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:18.074-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.345-0500 c20012| 2016-04-06T02:52:08.075-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 289 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:22.345-0500 c20012| 2016-04-06T02:52:08.076-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.351-0500 c20012| 2016-04-06T02:52:08.076-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:22.358-0500 c20012| 2016-04-06T02:52:08.076-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 289 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, opTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.359-0500 c20012| 2016-04-06T02:52:08.080-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:10.080Z [js_test:multi_coll_drop] 2016-04-06T02:52:22.364-0500 c20012| 2016-04-06T02:52:08.080-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.367-0500 c20011| 2016-04-06T02:52:07.543-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.369-0500 c20011| 2016-04-06T02:52:07.543-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:22.372-0500 c20011| 2016-04-06T02:52:07.543-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.386-0500 c20011| 2016-04-06T02:52:07.543-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|14, t: 1 } and is durable through: { ts: Timestamp 1459929127000|14, t: 1 }
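
Requests 287 and 289 above are the steady-state heartbeat fabric: each member sends replSetHeartbeat to every other member on a roughly two-second cadence (see the "Scheduling heartbeat ... at ..." lines), and the responses carry state, term, syncingTo, and both optimes, which is how members track the primary and pick sync sources without extra round trips. The timing knobs live in the replica-set config, roughly:

    // sketch: the cadence/timeout behind the heartbeats above
    var cfg = rs.conf();
    printjson(cfg.settings);   // heartbeatIntervalMillis / heartbeatTimeoutSecs, where present
    var st = db.adminCommand({ replSetGetStatus: 1 });
    st.members.forEach(function (m) { print(m.name, m.lastHeartbeat, m.syncingTo); });
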
[js_test:multi_coll_drop] 2016-04-06T02:52:22.389-0500 c20011| 2016-04-06T02:52:07.543-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.393-0500 c20011| 2016-04-06T02:52:07.543-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.396-0500 c20011| 2016-04-06T02:52:07.544-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.406-0500 c20011| 2016-04-06T02:52:07.544-0500 I COMMAND [conn10] command config.shards command: insert { insert: "shards", documents: [ { _id: "shard0000", host: "mongovm16:20010" } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.407-0500 c20011| 2016-04-06T02:52:07.544-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.408-0500 c20011| 2016-04-06T02:52:07.544-0500 D COMMAND [conn10] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|14, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.408-0500 c20011| 2016-04-06T02:52:07.544-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.410-0500 c20011| 2016-04-06T02:52:07.544-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|14, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.411-0500 c20011| 2016-04-06T02:52:07.544-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:22.415-0500 c20011| 2016-04-06T02:52:07.544-0500 I COMMAND [conn10] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|14, t: 1 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:443 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.418-0500 c20011| 2016-04-06T02:52:07.544-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.418-0500 c20011| 2016-04-06T02:52:07.544-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:22.419-0500 c20011| 2016-04-06T02:52:07.544-0500 D COMMAND [conn10] run command config.$cmd { create: "changelog", capped: true, size: 10485760, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.420-0500 c20011| 2016-04-06T02:52:07.544-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|14, t: 1 } and is durable through: { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.424-0500 c20011| 2016-04-06T02:52:07.545-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.425-0500 c20011| 2016-04-06T02:52:07.545-0500 D STORAGE [conn10] create collection config.changelog { capped: true, size: 10485760, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.427-0500 c20011| 2016-04-06T02:52:07.545-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.428-0500 c20011| 2016-04-06T02:52:07.545-0500 D STORAGE [conn10] stored meta data for config.changelog @ RecordId(14)
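
The conn10 sequence above is a majority read in flight: the find on config.shards carries readConcern { level: "majority", afterOpTime: ... }, so the server logs "Waiting for 'committed' snapshot" until the committed snapshot reaches that optime, and only then logs "Using 'committed' snapshot" and runs the COLLSCAN against it. afterOpTime is internal plumbing filled in by the sharding client; the user-visible shape of such a read is roughly:

    // sketch: a majority read like the one conn10 issues above
    // (afterOpTime is added internally by the sharding code, not by users)
    db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });
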
[js_test:multi_coll_drop] 2016-04-06T02:52:22.429-0500 c20011| 2016-04-06T02:52:07.545-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-33--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:22.431-0500 c20011| 2016-04-06T02:52:07.546-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.433-0500 c20011| 2016-04-06T02:52:07.547-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.436-0500 c20011| 2016-04-06T02:52:07.547-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.438-0500 c20011| 2016-04-06T02:52:07.552-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-33--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:22.439-0500 c20011| 2016-04-06T02:52:07.552-0500 D STORAGE [conn10] config.changelog: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:22.443-0500 c20011| 2016-04-06T02:52:07.552-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-34--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.changelog" }), [js_test:multi_coll_drop] 2016-04-06T02:52:22.449-0500 c20011| 2016-04-06T02:52:07.552-0500 D STORAGE [conn10] create uri: table:index-34--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.changelog" }), [js_test:multi_coll_drop] 2016-04-06T02:52:22.451-0500 c20011| 2016-04-06T02:52:07.565-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-34--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:22.453-0500 c20011| 2016-04-06T02:52:07.565-0500 D STORAGE [conn10] config.changelog: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:22.457-0500 c20011| 2016-04-06T02:52:07.566-0500 I COMMAND [conn10] command config.changelog command: create { create: "changelog", capped: true, size: 10485760, maxTimeMS: 30000 } numYields:0 reslen:308 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.462-0500 c20011| 2016-04-06T02:52:07.566-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", 
maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:480 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 18ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.469-0500 c20011| 2016-04-06T02:52:07.566-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:480 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 22ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.481-0500 c20011| 2016-04-06T02:52:07.566-0500 D COMMAND [conn10] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:07.566-0500-5704c02706c33406d4d9c0bc", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929127566), what: "addShard", ns: "", details: { name: "shard0000", host: "mongovm16:20010" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.487-0500 c20011| 2016-04-06T02:52:07.569-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.491-0500 c20011| 2016-04-06T02:52:07.569-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:673 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.493-0500 c20011| 2016-04-06T02:52:07.572-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.495-0500 c20011| 2016-04-06T02:52:07.572-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929127000|16, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|14, t: 1 }, name-id: "72" } [js_test:multi_coll_drop] 2016-04-06T02:52:22.499-0500 c20011| 2016-04-06T02:52:07.596-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.509-0500 c20011| 2016-04-06T02:52:07.597-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:673 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
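
The insert above is the catalog write itself: the new shard document goes into config.shards with writeConcern { w: "majority", wtimeout: 15000 }, and once it commits the catalog manager records an "addShard" event in the capped config.changelog (the create and insert around it). The same write shape, sketched against a scratch collection rather than the live catalog:

    // sketch: a majority-acknowledged write with the wtimeout used above
    db.getSiblingDB("test").scratch.insert(
        { _id: "shard0000-example", host: "mongovm16:20010" },   // hypothetical document
        { writeConcern: { w: "majority", wtimeout: 15000 } });
    // and the audit trail the real addShard leaves behind:
    db.getSiblingDB("config").changelog.find({ what: "addShard" }).sort({ time: -1 }).limit(1).pretty();
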
"oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.515-0500 c20011| 2016-04-06T02:52:07.606-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.515-0500 c20011| 2016-04-06T02:52:07.606-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:22.518-0500 c20011| 2016-04-06T02:52:07.606-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|15, t: 1 } and is durable through: { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.521-0500 c20011| 2016-04-06T02:52:07.606-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|16, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|14, t: 1 }, name-id: "72" } [js_test:multi_coll_drop] 2016-04-06T02:52:22.525-0500 c20011| 2016-04-06T02:52:07.606-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.530-0500 c20011| 2016-04-06T02:52:07.606-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.538-0500 c20011| 2016-04-06T02:52:07.612-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.538-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.539-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.540-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.543-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.547-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.547-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.549-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.552-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.556-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.557-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.557-0500 c20013| 2016-04-06T02:52:07.539-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.559-0500 c20013| 2016-04-06T02:52:07.539-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:22.562-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.570-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.572-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.573-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.578-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.578-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.581-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.592-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.592-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.596-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.597-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:22.597-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.598-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.602-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.603-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.604-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.606-0500 c20013| 2016-04-06T02:52:07.540-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.607-0500 c20013| 2016-04-06T02:52:07.541-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:22.619-0500 c20013| 2016-04-06T02:52:07.541-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.625-0500 c20013| 2016-04-06T02:52:07.541-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 266 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.628-0500 c20013| 2016-04-06T02:52:07.541-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 266 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.632-0500 c20013| 2016-04-06T02:52:07.541-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 266 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.637-0500 c20013| 2016-04-06T02:52:07.543-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: 
-1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.642-0500 c20013| 2016-04-06T02:52:07.543-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 268 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.644-0500 c20013| 2016-04-06T02:52:07.543-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 268 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.646-0500 c20013| 2016-04-06T02:52:07.543-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 268 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.650-0500 c20013| 2016-04-06T02:52:07.546-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 270 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.546-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.651-0500 c20013| 2016-04-06T02:52:07.546-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 270 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.657-0500 c20013| 2016-04-06T02:52:07.547-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 270 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.660-0500 c20013| 2016-04-06T02:52:07.547-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.661-0500 c20013| 2016-04-06T02:52:07.547-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:22.666-0500 c20013| 2016-04-06T02:52:07.547-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 272 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.547-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.667-0500 c20013| 2016-04-06T02:52:07.547-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 272 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.674-0500 c20013| 2016-04-06T02:52:07.566-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 272 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|15, t: 1, h: 8652652745203495356, v: 2, op: "c", ns: "config.$cmd", o: { create: "changelog", capped: true, size: 10485760 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.675-0500 c20013| 2016-04-06T02:52:07.584-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at 
ts: Timestamp 1459929127000|15 and ending at ts: Timestamp 1459929127000|15 [js_test:multi_coll_drop] 2016-04-06T02:52:22.679-0500 c20013| 2016-04-06T02:52:07.584-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:22.679-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.681-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.682-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.685-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.687-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.688-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.689-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.689-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.695-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.695-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.696-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.698-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.703-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.706-0500 c20013| 2016-04-06T02:52:07.584-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.707-0500 c20013| 2016-04-06T02:52:07.584-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:22.708-0500 c20013| 2016-04-06T02:52:07.585-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.709-0500 c20013| 2016-04-06T02:52:07.585-0500 D STORAGE [repl writer worker 5] create collection config.changelog { capped: true, size: 10485760 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.709-0500 c20013| 2016-04-06T02:52:07.585-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.710-0500 Recreating replica set from config { [js_test:multi_coll_drop] 
2016-04-06T02:52:22.710-0500 "_id" : "multidrop-configRS",
[js_test:multi_coll_drop] 2016-04-06T02:52:22.711-0500 "version" : 1,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.711-0500 "configsvr" : true,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.712-0500 "protocolVersion" : NumberLong(1),
[js_test:multi_coll_drop] 2016-04-06T02:52:22.712-0500 "members" : [
[js_test:multi_coll_drop] 2016-04-06T02:52:22.712-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.713-0500 "_id" : 0,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.714-0500 "host" : "mongovm16:20011",
[js_test:multi_coll_drop] 2016-04-06T02:52:22.715-0500 "arbiterOnly" : false,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.715-0500 "buildIndexes" : true,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.716-0500 "hidden" : false,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.716-0500 "priority" : 1,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.716-0500 "tags" : {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.716-0500
[js_test:multi_coll_drop] 2016-04-06T02:52:22.717-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:52:22.717-0500 "slaveDelay" : NumberLong(0),
[js_test:multi_coll_drop] 2016-04-06T02:52:22.718-0500 "votes" : 1
[js_test:multi_coll_drop] 2016-04-06T02:52:22.718-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:52:22.718-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.727-0500 "_id" : 1,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.728-0500 "host" : "mongovm16:20012",
[js_test:multi_coll_drop] 2016-04-06T02:52:22.729-0500 "arbiterOnly" : false,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.729-0500 "buildIndexes" : true,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.729-0500 "hidden" : false,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.729-0500 "priority" : 1,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.731-0500 "tags" : {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.732-0500
[js_test:multi_coll_drop] 2016-04-06T02:52:22.732-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:52:22.738-0500 "slaveDelay" : NumberLong(0),
[js_test:multi_coll_drop] 2016-04-06T02:52:22.741-0500 "votes" : 1
[js_test:multi_coll_drop] 2016-04-06T02:52:22.741-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:52:22.744-0500 {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.746-0500 "_id" : 2,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.747-0500 "host" : "mongovm16:20013",
[js_test:multi_coll_drop] 2016-04-06T02:52:22.750-0500 "arbiterOnly" : false,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.753-0500 "buildIndexes" : true,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.766-0500 "hidden" : false,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.766-0500 "priority" : 1,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.767-0500 "tags" : {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.767-0500
[js_test:multi_coll_drop] 2016-04-06T02:52:22.768-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:52:22.768-0500 "slaveDelay" : NumberLong(0),
[js_test:multi_coll_drop] 2016-04-06T02:52:22.768-0500 "votes" : 1
[js_test:multi_coll_drop] 2016-04-06T02:52:22.768-0500 }
[js_test:multi_coll_drop] 2016-04-06T02:52:22.769-0500 ],
[js_test:multi_coll_drop] 2016-04-06T02:52:22.769-0500 "settings" : {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.770-0500 "chainingAllowed" : true,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.771-0500 "heartbeatIntervalMillis" : 2000,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.772-0500 "heartbeatTimeoutSecs" : 10,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.772-0500 "electionTimeoutMillis" : 5000,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.773-0500 "getLastErrorModes" : {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.773-0500
[js_test:multi_coll_drop] 2016-04-06T02:52:22.773-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:52:22.774-0500 "getLastErrorDefaults" : {
[js_test:multi_coll_drop] 2016-04-06T02:52:22.775-0500 "w" : 1,
[js_test:multi_coll_drop] 2016-04-06T02:52:22.778-0500 "wtimeout" : 0
[js_test:multi_coll_drop] 2016-04-06T02:52:22.778-0500 },
[js_test:multi_coll_drop] 2016-04-06T02:52:22.778-0500 "replicaSetId" : ObjectId("5704c01d3876c4cfd2eb3eb9")
[js_test:multi_coll_drop] 2016-04-06T02:52:22.780-0500 }
[js_test:multi_coll_drop] 2016-04-06T02:52:22.780-0500 }
[js_test:multi_coll_drop] 2016-04-06T02:52:22.780-0500 c20013| 2016-04-06T02:52:07.585-0500 D STORAGE [repl writer worker 5] stored meta data for config.changelog @ RecordId(15)
[js_test:multi_coll_drop] 2016-04-06T02:52:22.782-0500 c20013| 2016-04-06T02:52:07.585-0500 D STORAGE [repl writer worker 5] WiredTigerKVEngine::createRecordStore uri: table:collection-35-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:52:22.788-0500 c20013| 2016-04-06T02:52:07.596-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 274 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.596-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:22.789-0500 c20013| 2016-04-06T02:52:07.596-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 274 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:22.796-0500 c20013| 2016-04-06T02:52:07.597-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 274 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929127000|16, t: 1, h: 2661855704509793992, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:07.566-0500-5704c02706c33406d4d9c0bc", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929127566), what: "addShard", ns: "", details: { name: "shard0000", host: "mongovm16:20010" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:22.797-0500 c20013| 2016-04-06T02:52:07.597-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929127000|16 and ending at ts: Timestamp 1459929127000|16
[js_test:multi_coll_drop] 2016-04-06T02:52:22.798-0500 c20013| 2016-04-06T02:52:07.597-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes
[js_test:multi_coll_drop] 2016-04-06T02:52:22.800-0500 c20013| 2016-04-06T02:52:07.599-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 276 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.599-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:22.800-0500 c20013| 2016-04-06T02:52:07.599-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 276 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:22.803-0500 c20013| 2016-04-06T02:52:07.604-0500 D STORAGE [repl writer worker 5] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-35-751336887848580549 ok
range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:22.804-0500 c20013| 2016-04-06T02:52:07.604-0500 D STORAGE [repl writer worker 5] config.changelog: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:22.805-0500 c20013| 2016-04-06T02:52:07.604-0500 D STORAGE [repl writer worker 5] WiredTigerKVEngine::createSortedDataInterface ident: index-36-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.changelog" }), [js_test:multi_coll_drop] 2016-04-06T02:52:22.808-0500 c20013| 2016-04-06T02:52:07.604-0500 D STORAGE [repl writer worker 5] create uri: table:index-36-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.changelog" }), [js_test:multi_coll_drop] 2016-04-06T02:52:22.812-0500 c20013| 2016-04-06T02:52:07.615-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 276 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.813-0500 c20013| 2016-04-06T02:52:07.619-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.815-0500 c20013| 2016-04-06T02:52:07.619-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:22.817-0500 c20013| 2016-04-06T02:52:07.620-0500 D STORAGE [repl writer worker 5] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-36-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:22.819-0500 c20013| 2016-04-06T02:52:07.620-0500 D STORAGE [repl writer worker 5] config.changelog: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:22.821-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.822-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.823-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.824-0500 c20013| 2016-04-06T02:52:07.620-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 278 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.620-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.827-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.828-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.829-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 8] 
shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.829-0500 c20013| 2016-04-06T02:52:07.620-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 278 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.829-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.832-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.834-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.835-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.836-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.836-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.836-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.838-0500 c20013| 2016-04-06T02:52:07.620-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.840-0500 c20013| 2016-04-06T02:52:07.620-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 278 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.844-0500 c20013| 2016-04-06T02:52:07.620-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.844-0500 c20013| 2016-04-06T02:52:07.621-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:22.849-0500 c20013| 2016-04-06T02:52:07.621-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 280 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:12.621-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.853-0500 c20013| 2016-04-06T02:52:07.621-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 280 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.853-0500 c20013| 2016-04-06T02:52:07.633-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.858-0500 c20013| 2016-04-06T02:52:07.635-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:443 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:22.860-0500 c20013| 2016-04-06T02:52:07.635-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.863-0500 c20013| 2016-04-06T02:52:07.635-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.867-0500 c20013| 2016-04-06T02:52:07.635-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:22.869-0500 c20013| 2016-04-06T02:52:07.636-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:22.874-0500 c20013| 2016-04-06T02:52:07.636-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.880-0500 c20013| 2016-04-06T02:52:07.636-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 281 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:22.881-0500 c20013| 2016-04-06T02:52:07.636-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 281 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.884-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.886-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.890-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.891-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.892-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.894-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.899-0500 c20013| 2016-04-06T02:52:07.636-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 281 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.899-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.902-0500 c20013| 2016-04-06T02:52:07.636-0500 
D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.906-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.906-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.907-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.907-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.908-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.921-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.922-0500 c20013| 2016-04-06T02:52:07.636-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:22.927-0500 c20013| 2016-04-06T02:52:07.636-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.928-0500 c20013| 2016-04-06T02:52:07.637-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:22.930-0500 s20014| 2016-04-06T02:52:07.544-0500 D ASIO [conn1] startCommand: RemoteCommand 78 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.544-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929127000|14, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.931-0500 s20014| 2016-04-06T02:52:07.544-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 78 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.938-0500 s20014| 2016-04-06T02:52:07.544-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 78 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.939-0500 s20014| 2016-04-06T02:52:07.544-0500 D SHARDING [conn1] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.940-0500 s20014| 2016-04-06T02:52:07.544-0500 D ASIO [conn1] startCommand: RemoteCommand 80 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.544-0500 cmd:{ create: "changelog", capped: true, size: 10485760, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.949-0500 s20014| 2016-04-06T02:52:07.544-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 80 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.950-0500 s20014| 2016-04-06T02:52:07.566-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 80 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.952-0500 s20014| 2016-04-06T02:52:07.566-0500 I SHARDING [conn1] about to log metadata event into changelog: { _id: 
"mongovm16-2016-04-06T02:52:07.566-0500-5704c02706c33406d4d9c0bc", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929127566), what: "addShard", ns: "", details: { name: "shard0000", host: "mongovm16:20010" } } [js_test:multi_coll_drop] 2016-04-06T02:52:22.962-0500 s20014| 2016-04-06T02:52:07.566-0500 D ASIO [conn1] startCommand: RemoteCommand 82 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:37.566-0500 cmd:{ insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:07.566-0500-5704c02706c33406d4d9c0bc", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929127566), what: "addShard", ns: "", details: { name: "shard0000", host: "mongovm16:20010" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:22.963-0500 s20014| 2016-04-06T02:52:07.566-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 82 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:22.968-0500 s20014| 2016-04-06T02:52:07.620-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 82 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929127000|16, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:22.982-0500 s20014| 2016-04-06T02:52:08.349-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c02806c33406d4d9c0bd, why: enableSharding [js_test:multi_coll_drop] 2016-04-06T02:52:22.997-0500 s20014| 2016-04-06T02:52:08.349-0500 D ASIO [conn1] startCommand: RemoteCommand 84 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.349-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop", state: 0 }, update: { $set: { ts: ObjectId('5704c02806c33406d4d9c0bd'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128349), why: "enableSharding" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.005-0500 s20014| 2016-04-06T02:52:08.349-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 84 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:23.006-0500 s20014| 2016-04-06T02:52:08.356-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 84 finished with response: { lastErrorObject: { updatedExisting: false, n: 1, upserted: "multidrop" }, value: { _id: "multidrop", state: 2, ts: ObjectId('5704c02806c33406d4d9c0bd'), who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128349), why: "enableSharding" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.006-0500 s20014| 2016-04-06T02:52:08.357-0500 I SHARDING [conn1] distributed lock 'multidrop' acquired for 'enableSharding', ts : 5704c02806c33406d4d9c0bd [js_test:multi_coll_drop] 2016-04-06T02:52:23.007-0500 s20014| 2016-04-06T02:52:08.357-0500 D ASIO [conn1] startCommand: RemoteCommand 86 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.357-0500 cmd:{ find: "databases", filter: { _id: /^multidrop$/i }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.007-0500 
s20014| 2016-04-06T02:52:08.357-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 86 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:23.015-0500 s20014| 2016-04-06T02:52:08.357-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 86 finished with response: { waitedMS: 0, cursor: { id: 0, ns: "config.databases", firstBatch: [] }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.017-0500 s20014| 2016-04-06T02:52:08.357-0500 D ASIO [conn1] startCommand: RemoteCommand 88 -- target:mongovm16:20010 db:admin expDate:2016-04-06T02:52:38.357-0500 cmd:{ listDatabases: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.020-0500 s20014| 2016-04-06T02:52:08.357-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:23.024-0500 s20014| 2016-04-06T02:52:08.357-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 89 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:23.030-0500 s20014| 2016-04-06T02:52:08.358-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 89 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:23.033-0500 s20014| 2016-04-06T02:52:08.364-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:23.035-0500 s20014| 2016-04-06T02:52:08.364-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 89 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:23.037-0500 s20014| 2016-04-06T02:52:08.364-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 88 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:23.041-0500 s20014| 2016-04-06T02:52:08.365-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 88 finished with response: { databases: [ { name: "local", sizeOnDisk: 8192.0, empty: false } ], totalSize: 8192.0, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.041-0500 s20014| 2016-04-06T02:52:08.365-0500 I SHARDING [conn1] Placing [multidrop] on: shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:23.044-0500 s20014| 2016-04-06T02:52:08.365-0500 I SHARDING [conn1] Enabling sharding for database [multidrop] in config db [js_test:multi_coll_drop] 2016-04-06T02:52:23.052-0500 s20014| 2016-04-06T02:52:08.365-0500 D ASIO [conn1] startCommand: RemoteCommand 92 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.365-0500 cmd:{ update: "databases", updates: [ { q: { _id: "multidrop" }, u: { _id: "multidrop", primary: "shard0000", partitioned: true }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.053-0500 s20014| 2016-04-06T02:52:08.365-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 92 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:23.057-0500 s20014| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 92 finished with response: { ok: 1, nModified: 0, n: 1, upserted: [ { index: 0, _id: "multidrop" } ], opTime: { ts: Timestamp 1459929128000|4, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.058-0500 d20010| 2016-04-06T02:52:08.357-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59083 #3 (3 connections now open) [js_test:multi_coll_drop] 
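
Everything from the lock grab to the config.databases upsert above is the server-side half of a single client call. In the test's shell it comes down to something like the following (a sketch; assert.commandWorked is the usual jstest wrapper):

    // Client-side trigger for the sequence above: mongos takes the 'multidrop'
    // distributed lock, picks a primary shard (shard0000 here, the only one),
    // and upserts { _id: "multidrop", primary: "shard0000", partitioned: true }
    // into config.databases with majority write concern.
    assert.commandWorked(db.adminCommand({ enableSharding: "multidrop" }));
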
2016-04-06T02:52:23.058-0500 d20010| 2016-04-06T02:52:08.358-0500 I SHARDING [conn3] remote client 192.168.100.28:59083 initialized this host as shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:23.074-0500 d20010| 2016-04-06T02:52:08.358-0500 I SHARDING [ShardingState initialization] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers [js_test:multi_coll_drop] 2016-04-06T02:52:23.077-0500 d20010| 2016-04-06T02:52:08.358-0500 I SHARDING [ShardingState initialization] Updating config server connection string to: multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:23.081-0500 d20010| 2016-04-06T02:52:08.358-0500 I NETWORK [ShardingState initialization] Starting new replica set monitor for multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:23.081-0500 d20010| 2016-04-06T02:52:08.358-0500 I NETWORK [ReplicaSetMonitorWatcher] starting [js_test:multi_coll_drop] 2016-04-06T02:52:23.085-0500 d20010| 2016-04-06T02:52:08.362-0500 I SHARDING [thread1] creating distributed lock ping thread for process mongovm16:20010:1459929128:185613966 (sleeping for 30000ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.087-0500 d20010| 2016-04-06T02:52:08.364-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:23.089-0500 d20010| 2016-04-06T02:52:08.366-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:23.094-0500 d20010| 2016-04-06T02:52:08.371-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document [js_test:multi_coll_drop] 2016-04-06T02:52:23.096-0500 d20010| 2016-04-06T02:52:08.405-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59090 #4 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:23.097-0500 d20010| 2016-04-06T02:52:08.406-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59091 #5 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:23.099-0500 d20010| 2016-04-06T02:52:08.484-0500 I SHARDING [conn3] remotely refreshing metadata for multidrop.coll with requested shard version 1|0||5704c02806c33406d4d9c0c0, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000 [js_test:multi_coll_drop] 2016-04-06T02:52:23.101-0500 d20010| 2016-04-06T02:52:08.486-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:23.103-0500 d20010| 2016-04-06T02:52:08.488-0500 I SHARDING [conn3] collection multidrop.coll was previously unsharded, new metadata loaded with shard version 1|0||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.108-0500 d20010| 2016-04-06T02:52:08.488-0500 I SHARDING [conn3] collection version was loaded at version 1|0||5704c02806c33406d4d9c0c0, took 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.140-0500 d20010| 2016-04-06T02:52:08.507-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -100.0 } ], configdb: 
"multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|0, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.153-0500 d20010| 2016-04-06T02:52:08.520-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: MinKey }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f17c [js_test:multi_coll_drop] 2016-04-06T02:52:23.157-0500 d20010| 2016-04-06T02:52:08.520-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|0||5704c02806c33406d4d9c0c0, current metadata version is 1|0||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.158-0500 d20010| 2016-04-06T02:52:08.522-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|0||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.162-0500 d20010| 2016-04-06T02:52:08.522-0500 I SHARDING [conn5] splitChunk accepted at version 1|0||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.172-0500 d20010| 2016-04-06T02:52:08.528-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.528-0500-5704c02865c17830b843f17d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128528), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey } }, left: { min: { _id: MinKey }, max: { _id: -100.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.175-0500 d20010| 2016-04-06T02:52:08.539-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f17c' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:23.178-0500 d20010| 2016-04-06T02:52:08.542-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -100.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -99.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|2, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.182-0500 d20010| 2016-04-06T02:52:08.547-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -100.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f17e [js_test:multi_coll_drop] 2016-04-06T02:52:23.186-0500 d20010| 2016-04-06T02:52:08.547-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|2||5704c02806c33406d4d9c0c0, current metadata version is 1|2||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.189-0500 d20010| 2016-04-06T02:52:08.548-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|2||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.191-0500 d20010| 2016-04-06T02:52:08.548-0500 I SHARDING [conn5] splitChunk accepted at version 1|2||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.197-0500 d20010| 2016-04-06T02:52:08.554-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.554-0500-5704c02865c17830b843f17f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128554), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -100.0 }, max: { _id: MaxKey } }, left: { min: { _id: -100.0 }, max: { _id: -99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -99.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.199-0500 d20010| 2016-04-06T02:52:08.572-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f17e' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:23.202-0500 d20010| 2016-04-06T02:52:08.579-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -99.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -98.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|4, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.206-0500 d20010| 2016-04-06T02:52:08.585-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -99.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f180 [js_test:multi_coll_drop] 2016-04-06T02:52:23.208-0500 d20010| 2016-04-06T02:52:08.585-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|4||5704c02806c33406d4d9c0c0, current metadata version is 1|4||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.209-0500 d20010| 2016-04-06T02:52:08.586-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|4||5704c02806c33406d4d9c0c0, took 0ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.215-0500 d20010| 2016-04-06T02:52:08.586-0500 I SHARDING [conn5] splitChunk accepted at version 1|4||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.220-0500 d20010| 2016-04-06T02:52:08.598-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.598-0500-5704c02865c17830b843f181", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128598), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -99.0 }, max: { _id: MaxKey } }, left: { min: { _id: -99.0 }, max: { _id: -98.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -98.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.222-0500 d20010| 2016-04-06T02:52:08.609-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f180' unlocked. 
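
Each accepted split also lands as a "split" document in config.changelog, the capped collection whose creation was replicated earlier in this log. The before/left/right ranges printed above can be read back directly:

    // Reading back the split events recorded above from the config changelog.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .forEach(printjson);
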
[js_test:multi_coll_drop] 2016-04-06T02:52:23.227-0500 d20010| 2016-04-06T02:52:08.614-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -98.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -97.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|6, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.229-0500 d20010| 2016-04-06T02:52:08.626-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -98.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f182 [js_test:multi_coll_drop] 2016-04-06T02:52:23.232-0500 d20010| 2016-04-06T02:52:08.626-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|6||5704c02806c33406d4d9c0c0, current metadata version is 1|6||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.235-0500 d20010| 2016-04-06T02:52:08.628-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|6||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.235-0500 d20010| 2016-04-06T02:52:08.628-0500 I SHARDING [conn5] splitChunk accepted at version 1|6||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.243-0500 d20010| 2016-04-06T02:52:08.635-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.635-0500-5704c02865c17830b843f183", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128635), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -98.0 }, max: { _id: MaxKey } }, left: { min: { _id: -98.0 }, max: { _id: -97.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -97.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.244-0500 d20010| 2016-04-06T02:52:08.656-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f182' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:23.248-0500 d20010| 2016-04-06T02:52:08.658-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -97.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -96.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|8, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.252-0500 d20010| 2016-04-06T02:52:08.664-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -97.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f184 [js_test:multi_coll_drop] 2016-04-06T02:52:23.257-0500 d20010| 2016-04-06T02:52:08.664-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|8||5704c02806c33406d4d9c0c0, current metadata version is 1|8||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.260-0500 d20010| 2016-04-06T02:52:08.666-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|8||5704c02806c33406d4d9c0c0, took 2ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.261-0500 d20010| 2016-04-06T02:52:08.666-0500 I SHARDING [conn5] splitChunk accepted at version 1|8||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.271-0500 d20010| 2016-04-06T02:52:08.673-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.673-0500-5704c02865c17830b843f185", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128673), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -97.0 }, max: { _id: MaxKey } }, left: { min: { _id: -97.0 }, max: { _id: -96.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -96.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.272-0500 d20010| 2016-04-06T02:52:08.690-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f184' unlocked. 
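
The version bumps the log narrates (splitChunk accepted at 1|4, 1|6, 1|8, all under one epoch) are stored on the chunk documents themselves: the Timestamp printed as "1000|9" is shard version 1|9 (major|minor, seconds field scaled by 1000), and lastmodEpoch is the collection epoch. The current chunk map can be inspected with an ordinary query; on servers of this vintage config.chunks is keyed by namespace:

    // Inspecting the chunk metadata whose versions the log above is narrating.
    db.getSiblingDB("config").chunks
        .find({ ns: "multidrop.coll" },
              { min: 1, max: 1, lastmod: 1, lastmodEpoch: 1 })
        .sort({ min: 1 })
        .forEach(printjson);
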
[js_test:multi_coll_drop] 2016-04-06T02:52:23.276-0500 d20010| 2016-04-06T02:52:08.693-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -96.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -95.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|10, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.280-0500 d20010| 2016-04-06T02:52:08.704-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -96.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f186 [js_test:multi_coll_drop] 2016-04-06T02:52:23.285-0500 d20010| 2016-04-06T02:52:08.704-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|10||5704c02806c33406d4d9c0c0, current metadata version is 1|10||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.288-0500 d20010| 2016-04-06T02:52:08.705-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|10||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.290-0500 d20010| 2016-04-06T02:52:08.705-0500 I SHARDING [conn5] splitChunk accepted at version 1|10||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.294-0500 d20010| 2016-04-06T02:52:08.713-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.713-0500-5704c02865c17830b843f187", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128713), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -96.0 }, max: { _id: MaxKey } }, left: { min: { _id: -96.0 }, max: { _id: -95.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -95.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.297-0500 d20010| 2016-04-06T02:52:08.722-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f186' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:23.304-0500 d20010| 2016-04-06T02:52:08.725-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -95.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -94.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|12, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.315-0500 d20010| 2016-04-06T02:52:08.730-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -95.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f188 [js_test:multi_coll_drop] 2016-04-06T02:52:23.316-0500 d20010| 2016-04-06T02:52:08.730-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|12||5704c02806c33406d4d9c0c0, current metadata version is 1|12||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.322-0500 d20010| 2016-04-06T02:52:08.731-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|12||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.325-0500 d20010| 2016-04-06T02:52:08.731-0500 I SHARDING [conn5] splitChunk accepted at version 1|12||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.330-0500 d20010| 2016-04-06T02:52:08.735-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.735-0500-5704c02865c17830b843f189", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128735), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -95.0 }, max: { _id: MaxKey } }, left: { min: { _id: -95.0 }, max: { _id: -94.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -94.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.334-0500 d20010| 2016-04-06T02:52:08.765-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f188' unlocked. 
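
The matching release ("distributed lock with ts: ... unlocked") is the mirror image of the acquisition: a findAndModify keyed on the lock-session ObjectId that flips state back to 0. A minimal sketch under that assumption; the exact statement is internal to the server, and the ts value is the one from the log line above:

    // Lock release (sketch of the internal unlock): clear the state on the
    // document whose ts matches the session that took the lock.
    db.getSiblingDB("config").locks.findAndModify({
        query: { ts: ObjectId("5704c02865c17830b843f188") },
        update: { $set: { state: 0 } },
        new: true
    });
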
[js_test:multi_coll_drop] 2016-04-06T02:52:23.339-0500 d20010| 2016-04-06T02:52:08.769-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -94.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -93.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|14, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.341-0500 d20010| 2016-04-06T02:52:08.775-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -94.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f18a [js_test:multi_coll_drop] 2016-04-06T02:52:23.351-0500 d20010| 2016-04-06T02:52:08.775-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|14||5704c02806c33406d4d9c0c0, current metadata version is 1|14||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.353-0500 d20010| 2016-04-06T02:52:08.779-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|14||5704c02806c33406d4d9c0c0, took 3ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.354-0500 d20010| 2016-04-06T02:52:08.779-0500 I SHARDING [conn5] splitChunk accepted at version 1|14||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.363-0500 d20010| 2016-04-06T02:52:08.784-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.784-0500-5704c02865c17830b843f18b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128784), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -94.0 }, max: { _id: MaxKey } }, left: { min: { _id: -94.0 }, max: { _id: -93.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -93.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.366-0500 d20010| 2016-04-06T02:52:08.824-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f18a' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:23.373-0500 d20010| 2016-04-06T02:52:08.828-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -93.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -92.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|16, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.376-0500 d20010| 2016-04-06T02:52:08.832-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -93.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f18c [js_test:multi_coll_drop] 2016-04-06T02:52:23.379-0500 d20010| 2016-04-06T02:52:08.832-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|16||5704c02806c33406d4d9c0c0, current metadata version is 1|16||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.383-0500 d20010| 2016-04-06T02:52:08.835-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|16||5704c02806c33406d4d9c0c0, took 2ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.383-0500 d20010| 2016-04-06T02:52:08.835-0500 I SHARDING [conn5] splitChunk accepted at version 1|16||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.388-0500 d20010| 2016-04-06T02:52:08.842-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.842-0500-5704c02865c17830b843f18d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128842), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -93.0 }, max: { _id: MaxKey } }, left: { min: { _id: -93.0 }, max: { _id: -92.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -92.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.389-0500 d20010| 2016-04-06T02:52:08.856-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f18c' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:23.393-0500 d20010| 2016-04-06T02:52:08.859-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -92.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -91.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|18, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.396-0500 d20010| 2016-04-06T02:52:08.867-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -92.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f18e [js_test:multi_coll_drop] 2016-04-06T02:52:23.400-0500 d20010| 2016-04-06T02:52:08.867-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|18||5704c02806c33406d4d9c0c0, current metadata version is 1|18||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.401-0500 d20010| 2016-04-06T02:52:08.868-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|18||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.404-0500 d20010| 2016-04-06T02:52:08.868-0500 I SHARDING [conn5] splitChunk accepted at version 1|18||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.409-0500 d20010| 2016-04-06T02:52:08.872-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.872-0500-5704c02865c17830b843f18f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128872), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -92.0 }, max: { _id: MaxKey } }, left: { min: { _id: -92.0 }, max: { _id: -91.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -91.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.410-0500 d20010| 2016-04-06T02:52:08.886-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f18e' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:23.417-0500 d20010| 2016-04-06T02:52:08.888-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -91.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -90.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|20, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.419-0500 d20010| 2016-04-06T02:52:08.892-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -91.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f190 [js_test:multi_coll_drop] 2016-04-06T02:52:23.422-0500 d20010| 2016-04-06T02:52:08.892-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|20||5704c02806c33406d4d9c0c0, current metadata version is 1|20||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.423-0500 d20010| 2016-04-06T02:52:08.893-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|20||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.427-0500 d20010| 2016-04-06T02:52:08.893-0500 I SHARDING [conn5] splitChunk accepted at version 1|20||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.431-0500 d20010| 2016-04-06T02:52:08.900-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.900-0500-5704c02865c17830b843f191", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128900), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -91.0 }, max: { _id: MaxKey } }, left: { min: { _id: -91.0 }, max: { _id: -90.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -90.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.433-0500 d20010| 2016-04-06T02:52:08.911-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f190' unlocked. 
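Each cycle's "about to log metadata event" line corresponds to a document written into config.changelog on the config replica set (the insert itself is visible further down in the c20011 entries, carrying writeConcern { w: "majority", wtimeout: 15000 }). To audit the split history after the fact, one can read the changelog directly; a sketch, run against a mongos or the config server:

    // Fields match the changelog documents logged above.
    db.getSiblingDB("config").changelog.find({ ns: "multidrop.coll", what: "split" }).sort({ time: 1 })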
[js_test:multi_coll_drop] 2016-04-06T02:52:23.436-0500 d20010| 2016-04-06T02:52:08.914-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -90.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -89.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|22, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.441-0500 d20010| 2016-04-06T02:52:08.918-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -90.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f192
[js_test:multi_coll_drop] 2016-04-06T02:52:23.446-0500 d20010| 2016-04-06T02:52:08.919-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|22||5704c02806c33406d4d9c0c0, current metadata version is 1|22||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.454-0500 d20010| 2016-04-06T02:52:08.920-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|22||5704c02806c33406d4d9c0c0, took 1ms)
[js_test:multi_coll_drop] 2016-04-06T02:52:23.457-0500 d20010| 2016-04-06T02:52:08.920-0500 I SHARDING [conn5] splitChunk accepted at version 1|22||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.472-0500 d20010| 2016-04-06T02:52:08.923-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.923-0500-5704c02865c17830b843f193", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128923), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -90.0 }, max: { _id: MaxKey } }, left: { min: { _id: -90.0 }, max: { _id: -89.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -89.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.480-0500 d20010| 2016-04-06T02:52:08.929-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f192' unlocked.
[js_test:multi_coll_drop] 2016-04-06T02:52:23.489-0500 d20010| 2016-04-06T02:52:08.932-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -89.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -88.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|24, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.491-0500 d20010| 2016-04-06T02:52:08.934-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -89.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f194
[js_test:multi_coll_drop] 2016-04-06T02:52:23.495-0500 d20010| 2016-04-06T02:52:08.934-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|24||5704c02806c33406d4d9c0c0, current metadata version is 1|24||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.496-0500 d20010| 2016-04-06T02:52:08.936-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|24||5704c02806c33406d4d9c0c0, took 1ms)
[js_test:multi_coll_drop] 2016-04-06T02:52:23.498-0500 d20010| 2016-04-06T02:52:08.936-0500 I SHARDING [conn5] splitChunk accepted at version 1|24||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.516-0500 d20010| 2016-04-06T02:52:08.940-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.940-0500-5704c02865c17830b843f195", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128940), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -89.0 }, max: { _id: MaxKey } }, left: { min: { _id: -89.0 }, max: { _id: -88.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -88.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.517-0500 d20010| 2016-04-06T02:52:08.951-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f194' unlocked.
[js_test:multi_coll_drop] 2016-04-06T02:52:23.521-0500 d20010| 2016-04-06T02:52:08.953-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -88.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -87.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|26, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.525-0500 d20010| 2016-04-06T02:52:08.962-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -88.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f196
[js_test:multi_coll_drop] 2016-04-06T02:52:23.531-0500 d20010| 2016-04-06T02:52:08.962-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|26||5704c02806c33406d4d9c0c0, current metadata version is 1|26||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.536-0500 d20010| 2016-04-06T02:52:08.963-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|26||5704c02806c33406d4d9c0c0, took 1ms)
[js_test:multi_coll_drop] 2016-04-06T02:52:23.537-0500 d20010| 2016-04-06T02:52:08.963-0500 I SHARDING [conn5] splitChunk accepted at version 1|26||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.541-0500 d20010| 2016-04-06T02:52:08.967-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.967-0500-5704c02865c17830b843f197", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128967), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -88.0 }, max: { _id: MaxKey } }, left: { min: { _id: -88.0 }, max: { _id: -87.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -87.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.542-0500 d20010| 2016-04-06T02:52:08.976-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f196' unlocked.
[js_test:multi_coll_drop] 2016-04-06T02:52:23.546-0500 d20010| 2016-04-06T02:52:08.978-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -87.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -86.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|28, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.548-0500 d20010| 2016-04-06T02:52:08.988-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -87.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02865c17830b843f198
[js_test:multi_coll_drop] 2016-04-06T02:52:23.551-0500 d20010| 2016-04-06T02:52:08.988-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|28||5704c02806c33406d4d9c0c0, current metadata version is 1|28||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.552-0500 d20010| 2016-04-06T02:52:08.990-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|28||5704c02806c33406d4d9c0c0, took 1ms)
[js_test:multi_coll_drop] 2016-04-06T02:52:23.554-0500 d20010| 2016-04-06T02:52:08.990-0500 I SHARDING [conn5] splitChunk accepted at version 1|28||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.557-0500 d20010| 2016-04-06T02:52:09.014-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:09.014-0500-5704c02965c17830b843f199", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129014), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -87.0 }, max: { _id: MaxKey } }, left: { min: { _id: -87.0 }, max: { _id: -86.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -86.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.559-0500 d20010| 2016-04-06T02:52:09.034-0500 I SHARDING [conn5] distributed lock with ts: 5704c02865c17830b843f198' unlocked.
[js_test:multi_coll_drop] 2016-04-06T02:52:23.564-0500 d20010| 2016-04-06T02:52:09.036-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -86.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -85.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|30, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.567-0500 d20010| 2016-04-06T02:52:09.050-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -86.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02965c17830b843f19a
[js_test:multi_coll_drop] 2016-04-06T02:52:23.568-0500 d20010| 2016-04-06T02:52:09.050-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|30||5704c02806c33406d4d9c0c0, current metadata version is 1|30||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.570-0500 d20010| 2016-04-06T02:52:09.051-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|30||5704c02806c33406d4d9c0c0, took 1ms)
[js_test:multi_coll_drop] 2016-04-06T02:52:23.572-0500 d20010| 2016-04-06T02:52:09.051-0500 I SHARDING [conn5] splitChunk accepted at version 1|30||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.577-0500 d20010| 2016-04-06T02:52:09.063-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:09.062-0500-5704c02965c17830b843f19b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129062), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -86.0 }, max: { _id: MaxKey } }, left: { min: { _id: -86.0 }, max: { _id: -85.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -85.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.578-0500 d20010| 2016-04-06T02:52:09.085-0500 I SHARDING [conn5] distributed lock with ts: 5704c02965c17830b843f19a' unlocked.
[js_test:multi_coll_drop] 2016-04-06T02:52:23.585-0500 d20010| 2016-04-06T02:52:09.093-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -85.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -84.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|32, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.586-0500 d20010| 2016-04-06T02:52:09.100-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02965c17830b843f19c
[js_test:multi_coll_drop] 2016-04-06T02:52:23.588-0500 d20010| 2016-04-06T02:52:09.100-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|32||5704c02806c33406d4d9c0c0, current metadata version is 1|32||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.588-0500 d20010| 2016-04-06T02:52:09.102-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|32||5704c02806c33406d4d9c0c0, took 1ms)
[js_test:multi_coll_drop] 2016-04-06T02:52:23.591-0500 d20010| 2016-04-06T02:52:09.102-0500 I SHARDING [conn5] splitChunk accepted at version 1|32||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:52:23.596-0500 d20010| 2016-04-06T02:52:09.107-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:09.107-0500-5704c02965c17830b843f19d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129107), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -85.0 }, max: { _id: MaxKey } }, left: { min: { _id: -85.0 }, max: { _id: -84.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -84.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:23.597-0500 d20010| 2016-04-06T02:52:09.126-0500 I SHARDING [conn5] distributed lock with ts: 5704c02965c17830b843f19c' unlocked.
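Note the version bookkeeping across these cycles: the requests arrive with shardVersion 1|10, 1|12, …, 1|32 under a fixed epoch ObjectId('5704c02806c33406d4d9c0c0'). Each committed split consumes two minor versions, the left chunk taking lastmod 1000|(2k+1) and the right 1000|(2k+2); the major version moves only on chunk migrations, and the epoch changes only if the collection is dropped and recreated. The resulting layout can be checked against config.chunks; a sketch:

    // List chunk boundaries and their versions, sorted by range.
    db.getSiblingDB("config").chunks.find(
        { ns: "multidrop.coll" },
        { min: 1, max: 1, lastmod: 1, lastmodEpoch: 1 }
    ).sort({ min: 1 })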
[js_test:multi_coll_drop] 2016-04-06T02:52:23.599-0500 d20010| 2016-04-06T02:52:09.129-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -84.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -83.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|34, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:23.603-0500 d20010| 2016-04-06T02:52:09.142-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -84.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02965c17830b843f19e [js_test:multi_coll_drop] 2016-04-06T02:52:23.605-0500 d20010| 2016-04-06T02:52:09.142-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|34||5704c02806c33406d4d9c0c0, current metadata version is 1|34||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.608-0500 d20010| 2016-04-06T02:52:09.144-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|34||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:23.609-0500 d20010| 2016-04-06T02:52:09.144-0500 I SHARDING [conn5] splitChunk accepted at version 1|34||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:23.609-0500 c20011| 2016-04-06T02:52:07.612-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:23.616-0500 c20011| 2016-04-06T02:52:07.612-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.619-0500 c20011| 2016-04-06T02:52:07.612-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|16, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|14, t: 1 }, name-id: "72" } [js_test:multi_coll_drop] 2016-04-06T02:52:23.623-0500 c20011| 2016-04-06T02:52:07.612-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.679-0500 c20011| 2016-04-06T02:52:07.612-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.681-0500 c20011| 2016-04-06T02:52:07.615-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:23.692-0500 c20011| 2016-04-06T02:52:07.615-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:23.696-0500 c20011| 2016-04-06T02:52:07.615-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.697-0500 c20011| 2016-04-06T02:52:07.615-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.698-0500 c20011| 2016-04-06T02:52:07.615-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|16, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|15, t: 1 }, name-id: "75" } [js_test:multi_coll_drop] 2016-04-06T02:52:23.700-0500 c20011| 2016-04-06T02:52:07.615-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929127000|16, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|15, t: 1 }, name-id: "75" } [js_test:multi_coll_drop] 2016-04-06T02:52:23.703-0500 c20011| 2016-04-06T02:52:07.615-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.707-0500 c20011| 2016-04-06T02:52:07.615-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.709-0500 c20011| 2016-04-06T02:52:07.615-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 15ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.713-0500 c20011| 2016-04-06T02:52:07.615-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|14, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 43ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.714-0500 c20011| 2016-04-06T02:52:07.615-0500 D COMMAND [conn13] run command 
local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.718-0500 c20011| 2016-04-06T02:52:07.620-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:23.720-0500 c20011| 2016-04-06T02:52:07.620-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:23.722-0500 c20011| 2016-04-06T02:52:07.620-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.727-0500 c20011| 2016-04-06T02:52:07.620-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.730-0500 c20011| 2016-04-06T02:52:07.620-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.739-0500 c20011| 2016-04-06T02:52:07.620-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.744-0500 c20011| 2016-04-06T02:52:07.620-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.749-0500 c20011| 2016-04-06T02:52:07.620-0500 I COMMAND [conn10] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:07.566-0500-5704c02706c33406d4d9c0bc", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929127566), what: "addShard", ns: "", details: { name: "shard0000", host: "mongovm16:20010" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 54ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.752-0500 c20011| 2016-04-06T02:52:07.620-0500 I COMMAND [conn14] command 
local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|15, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.755-0500 c20011| 2016-04-06T02:52:07.621-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|15, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.757-0500 c20011| 2016-04-06T02:52:07.621-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.759-0500 c20011| 2016-04-06T02:52:07.621-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:23.759-0500 c20011| 2016-04-06T02:52:07.630-0500 D COMMAND [conn1] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.760-0500 c20011| 2016-04-06T02:52:07.630-0500 I COMMAND [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.761-0500 c20011| 2016-04-06T02:52:07.635-0500 D COMMAND [conn1] run command admin.$cmd { replSetGetConfig: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.762-0500 c20011| 2016-04-06T02:52:07.635-0500 D COMMAND [conn1] command: replSetGetConfig [js_test:multi_coll_drop] 2016-04-06T02:52:23.764-0500 c20011| 2016-04-06T02:52:07.635-0500 I COMMAND [conn1] command admin.$cmd command: replSetGetConfig { replSetGetConfig: 1.0 } numYields:0 reslen:823 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.775-0500 c20011| 2016-04-06T02:52:07.636-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:23.776-0500 c20011| 2016-04-06T02:52:07.636-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:23.780-0500 c20011| 2016-04-06T02:52:07.636-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.784-0500 c20011| 2016-04-06T02:52:07.636-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 
has reached optime: { ts: Timestamp 1459929127000|15, t: 1 } and is durable through: { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.787-0500 c20011| 2016-04-06T02:52:07.636-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.792-0500 c20011| 2016-04-06T02:52:07.639-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:23.796-0500 c20011| 2016-04-06T02:52:07.639-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:23.800-0500 c20011| 2016-04-06T02:52:07.639-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.802-0500 c20011| 2016-04-06T02:52:07.639-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.823-0500 c20011| 2016-04-06T02:52:07.639-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.837-0500 c20011| 2016-04-06T02:52:07.640-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:23.837-0500 c20011| 2016-04-06T02:52:07.640-0500 D 
COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:23.840-0500 c20011| 2016-04-06T02:52:07.640-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.844-0500 c20011| 2016-04-06T02:52:07.640-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.850-0500 c20011| 2016-04-06T02:52:07.640-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.858-0500 c20011| 2016-04-06T02:52:07.646-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:23.860-0500 c20011| 2016-04-06T02:52:07.646-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:23.865-0500 c20011| 2016-04-06T02:52:07.646-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929117000|1, t: -1 } and is durable through: { ts: Timestamp 1459929117000|1, t: -1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.869-0500 c20011| 2016-04-06T02:52:07.646-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.879-0500 c20011| 2016-04-06T02:52:07.646-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.883-0500 c20011| 2016-04-06T02:52:08.066-0500 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: 
"multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.884-0500 c20011| 2016-04-06T02:52:08.066-0500 D COMMAND [conn2] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:23.891-0500 c20011| 2016-04-06T02:52:08.072-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:480 locks:{} protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.894-0500 c20011| 2016-04-06T02:52:08.073-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.897-0500 c20011| 2016-04-06T02:52:08.073-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:23.900-0500 c20011| 2016-04-06T02:52:08.073-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:23.903-0500 s20014| 2016-04-06T02:52:08.395-0500 D ASIO [conn1] startCommand: RemoteCommand 94 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.395-0500 cmd:{ findAndModify: "locks", query: { ts: ObjectId('5704c02806c33406d4d9c0bd') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.904-0500 s20014| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 94 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:23.912-0500 s20014| 2016-04-06T02:52:08.400-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 94 finished with response: { lastErrorObject: { updatedExisting: true, n: 1 }, value: { _id: "multidrop", state: 2, ts: ObjectId('5704c02806c33406d4d9c0bd'), who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128349), why: "enableSharding" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.913-0500 s20014| 2016-04-06T02:52:08.400-0500 I SHARDING [conn1] distributed lock with ts: 5704c02806c33406d4d9c0bd' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:23.920-0500 s20014| 2016-04-06T02:52:08.400-0500 D ASIO [conn1] startCommand: RemoteCommand 96 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.400-0500 cmd:{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.923-0500 s20014| 2016-04-06T02:52:08.400-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 96 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:23.924-0500 s20014| 2016-04-06T02:52:08.404-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 96 finished with response: { waitedMS: 3, cursor: { firstBatch: [ { _id: "multidrop", primary: "shard0000", partitioned: true } ], id: 0, ns: "config.databases" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.928-0500 s20014| 2016-04-06T02:52:08.404-0500 D ASIO [conn1] startCommand: RemoteCommand 98 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.404-0500 cmd:{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.928-0500 s20014| 2016-04-06T02:52:08.404-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 98 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:23.930-0500 s20014| 2016-04-06T02:52:08.404-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 98 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop", primary: "shard0000", partitioned: true } ], id: 0, ns: "config.databases" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.935-0500 s20014| 2016-04-06T02:52:08.404-0500 D ASIO [conn1] startCommand: RemoteCommand 100 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.404-0500 cmd:{ find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.939-0500 s20014| 2016-04-06T02:52:08.404-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 100 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:23.942-0500 s20014| 2016-04-06T02:52:08.404-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 100 finished with response: { waitedMS: 0, cursor: { id: 0, ns: "config.collections", firstBatch: [] }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:23.944-0500 s20014| 2016-04-06T02:52:08.404-0500 D SHARDING [conn1] found 0 collections left and 0 collections dropped for database multidrop [js_test:multi_coll_drop] 2016-04-06T02:52:23.949-0500 s20014| 2016-04-06T02:52:08.404-0500 D NETWORK [conn1] creating new connection to:mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:23.963-0500 s20014| 2016-04-06T02:52:08.405-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:23.966-0500 c20013| 2016-04-06T02:52:07.637-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.967-0500 c20013| 2016-04-06T02:52:07.637-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.969-0500 c20013| 
2016-04-06T02:52:07.637-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.970-0500 c20013| 2016-04-06T02:52:07.637-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.970-0500 c20013| 2016-04-06T02:52:07.637-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.971-0500 c20013| 2016-04-06T02:52:07.637-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.976-0500 c20013| 2016-04-06T02:52:07.637-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.979-0500 c20013| 2016-04-06T02:52:07.638-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.980-0500 c20013| 2016-04-06T02:52:07.638-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.982-0500 c20013| 2016-04-06T02:52:07.639-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.984-0500 c20013| 2016-04-06T02:52:07.639-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.985-0500 c20013| 2016-04-06T02:52:07.639-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.989-0500 c20013| 2016-04-06T02:52:07.639-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.991-0500 c20013| 2016-04-06T02:52:07.639-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.992-0500 c20013| 2016-04-06T02:52:07.639-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.992-0500 c20013| 2016-04-06T02:52:07.639-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:23.995-0500 c20013| 2016-04-06T02:52:07.639-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:24.031-0500 c20013| 2016-04-06T02:52:07.639-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.043-0500 c20013| 2016-04-06T02:52:07.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 283 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.044-0500 c20013| 2016-04-06T02:52:07.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 283 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.048-0500 c20013| 2016-04-06T02:52:07.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 283 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.059-0500 c20013| 2016-04-06T02:52:07.640-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.067-0500 c20013| 2016-04-06T02:52:07.640-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 285 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.069-0500 c20013| 2016-04-06T02:52:07.640-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 285 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.072-0500 c20013| 2016-04-06T02:52:07.640-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 285 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.081-0500 c20013| 2016-04-06T02:52:07.646-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.087-0500 c20013| 2016-04-06T02:52:07.646-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 287 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.091-0500 c20013| 2016-04-06T02:52:07.646-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 287 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.092-0500 c20013| 2016-04-06T02:52:07.646-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 287 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.097-0500 c20013| 2016-04-06T02:52:08.073-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 289 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:18.073-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.102-0500 c20013| 2016-04-06T02:52:08.073-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 289 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.108-0500 c20013| 2016-04-06T02:52:08.073-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 289 finished with response: { ok: 1.0, electionTime: new Date(6270347837762961409), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, opTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:24.111-0500 c20013| 2016-04-06T02:52:08.074-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:10.074Z [js_test:multi_coll_drop] 2016-04-06T02:52:24.112-0500 c20013| 2016-04-06T02:52:08.075-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.113-0500 c20013| 2016-04-06T02:52:08.075-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:24.124-0500 c20013| 2016-04-06T02:52:08.076-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 291 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:18.076-0500 cmd:{ replSetHeartbeat: 
"multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.138-0500 c20013| 2016-04-06T02:52:08.076-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 291 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:24.147-0500 c20013| 2016-04-06T02:52:08.076-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:24.160-0500 c20013| 2016-04-06T02:52:08.081-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 291 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, opTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:24.160-0500 c20013| 2016-04-06T02:52:08.082-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:10.082Z [js_test:multi_coll_drop] 2016-04-06T02:52:24.164-0500 c20012| 2016-04-06T02:52:08.350-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 286 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|1, t: 1, h: 2686008167822126967, v: 2, op: "i", ns: "config.locks", o: { _id: "multidrop", state: 2, ts: ObjectId('5704c02806c33406d4d9c0bd'), who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128349), why: "enableSharding" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.174-0500 c20012| 2016-04-06T02:52:08.351-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|1 and ending at ts: Timestamp 1459929128000|1 [js_test:multi_coll_drop] 2016-04-06T02:52:24.177-0500 c20012| 2016-04-06T02:52:08.351-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:24.179-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.181-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.189-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.192-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.192-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.195-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.198-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.201-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.205-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.210-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.211-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.212-0500 c20012| 2016-04-06T02:52:08.351-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:24.214-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.215-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.220-0500 c20012| 2016-04-06T02:52:08.351-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.222-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.224-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.224-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.229-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.229-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
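The entries above show one round of the replication protocol on the config servers: the secondary (c20012) fetches a batch from its sync source (here a single config.locks upsert, the "multidrop" distributed lock taken for enableSharding), a pool of repl writer worker threads applies the batch, and the SyncSourceFeedback reporter pushes the new durable/applied optimes back to the primary via replSetUpdatePosition, which then advances _lastCommittedOpTime. A minimal sketch of how the same progress can be observed from the mongo shell, assuming a connection to any member of multidrop-configRS (replSetGetStatus is the standard command behind rs.status()):

    // Dump each member's applied optime; these are the same values the log
    // reports via replSetUpdatePosition.
    var conn = new Mongo("mongovm16:20011");   // any replica-set member works
    var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
    status.members.forEach(function(m) {
        print(m.name + "  state=" + m.stateStr + "  optime=" + tojson(m.optime));
    });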
2016-04-06T02:52:24.229-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.230-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.232-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.233-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.234-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.235-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.242-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.244-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.251-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.253-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.253-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.254-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.260-0500 c20012| 2016-04-06T02:52:08.352-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:24.262-0500 c20012| 2016-04-06T02:52:08.353-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:24.266-0500 c20012| 2016-04-06T02:52:08.353-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.271-0500 c20012| 2016-04-06T02:52:08.353-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 292 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.272-0500 c20012| 2016-04-06T02:52:08.353-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 292 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.274-0500 c20012| 2016-04-06T02:52:08.353-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 293 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.353-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:24.277-0500 c20012| 2016-04-06T02:52:08.353-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 292 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.278-0500 c20012| 2016-04-06T02:52:08.353-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 293 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.284-0500 c20012| 2016-04-06T02:52:08.355-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.289-0500 c20012| 2016-04-06T02:52:08.355-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 295 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.293-0500 c20012| 2016-04-06T02:52:08.355-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 295 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.296-0500 c20012| 2016-04-06T02:52:08.355-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 295 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.298-0500 c20012| 2016-04-06T02:52:08.356-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 293 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.301-0500 c20012| 2016-04-06T02:52:08.356-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.303-0500 c20011| 2016-04-06T02:52:08.349-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59096 #23 (19 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:24.305-0500 s20014| 2016-04-06T02:52:08.405-0500 D NETWORK [conn1] connected to server mongovm16:20010 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:24.310-0500 c20011| 2016-04-06T02:52:08.349-0500 D COMMAND [conn10] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop", state: 0 }, update: { $set: { ts: ObjectId('5704c02806c33406d4d9c0bd'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128349), why: "enableSharding" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.310-0500 c20011| 2016-04-06T02:52:08.349-0500 D QUERY [conn10] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:24.311-0500 s20014| 2016-04-06T02:52:08.405-0500 D NETWORK [conn1] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:52:24.312-0500 c20011| 2016-04-06T02:52:08.349-0500 D QUERY [conn10] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:24.314-0500 c20011| 2016-04-06T02:52:08.349-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.315-0500 c20011| 2016-04-06T02:52:08.349-0500 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.315-0500 c20011| 2016-04-06T02:52:08.349-0500 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:24.316-0500 s20014| 2016-04-06T02:52:08.406-0500 D NETWORK [conn1] creating new connection to:mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:24.316-0500 s20014| 2016-04-06T02:52:08.406-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:24.317-0500 s20014| 2016-04-06T02:52:08.406-0500 D NETWORK [conn1] connected to server mongovm16:20010 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:24.319-0500 s20014| 2016-04-06T02:52:08.406-0500 D NETWORK [conn1] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:52:24.321-0500 s20014| 2016-04-06T02:52:08.406-0500 D SHARDING [conn1] setShardVersion shard0000 mongovm16:20010 { setShardVersion: "", init: true, authoritative: true, configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shard: "shard0000", shardHost: "mongovm16:20010", maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.323-0500 s20014| 2016-04-06T02:52:08.413-0500 I COMMAND [conn1] CMD: shardcollection: { shardCollection: "multidrop.coll", key: { _id: 1.0 } } [js_test:multi_coll_drop] 2016-04-06T02:52:24.326-0500 s20014| 2016-04-06T02:52:08.413-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c02806c33406d4d9c0be, why: shardCollection [js_test:multi_coll_drop] 2016-04-06T02:52:24.335-0500 s20014| 2016-04-06T02:52:08.413-0500 D ASIO [conn1] startCommand: RemoteCommand 102 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.413-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02806c33406d4d9c0be'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128413), why: "shardCollection" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.337-0500 s20014| 2016-04-06T02:52:08.413-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 102 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.343-0500 s20014| 2016-04-06T02:52:08.420-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 102 finished with response: { lastErrorObject: { updatedExisting: false, n: 1, upserted: "multidrop.coll" }, value: { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c02806c33406d4d9c0be'), who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128413), why: "shardCollection" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.346-0500 s20014| 2016-04-06T02:52:08.420-0500 I SHARDING [conn1] distributed lock 'multidrop.coll' acquired for 'shardCollection', ts : 5704c02806c33406d4d9c0be [js_test:multi_coll_drop] 2016-04-06T02:52:24.348-0500 s20014| 
2016-04-06T02:52:08.420-0500 D ASIO [conn1] startCommand: RemoteCommand 104 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.420-0500 cmd:{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.351-0500 s20014| 2016-04-06T02:52:08.420-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 104 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.354-0500 s20014| 2016-04-06T02:52:08.420-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 104 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop", primary: "shard0000", partitioned: true } ], id: 0, ns: "config.databases" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.357-0500 s20014| 2016-04-06T02:52:08.420-0500 D ASIO [conn1] startCommand: RemoteCommand 106 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.420-0500 cmd:{ count: "chunks", query: { ns: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.360-0500 s20014| 2016-04-06T02:52:08.420-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 106 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:24.364-0500 s20014| 2016-04-06T02:52:08.421-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 106 finished with response: { waitedMS: 0, n: 0, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.374-0500 s20014| 2016-04-06T02:52:08.421-0500 I SHARDING [conn1] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.421-0500-5704c02806c33406d4d9c0bf", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128421), what: "shardCollection.start", ns: "multidrop.coll", details: { shardKey: { _id: 1.0 }, collection: "multidrop.coll", primary: "shard0000:mongovm16:20010", initShards: [], numChunks: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:24.383-0500 c20011| 2016-04-06T02:52:08.350-0500 D COMMAND [conn23] run command admin.$cmd { replSetGetConfig: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.384-0500 c20011| 2016-04-06T02:52:08.350-0500 D COMMAND [conn23] command: replSetGetConfig [js_test:multi_coll_drop] 2016-04-06T02:52:24.389-0500 c20011| 2016-04-06T02:52:08.350-0500 I COMMAND [conn23] command admin.$cmd command: replSetGetConfig { replSetGetConfig: 1.0 } numYields:0 reslen:823 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:24.393-0500 c20011| 2016-04-06T02:52:08.350-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:628 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 728ms [js_test:multi_coll_drop] 2016-04-06T02:52:24.397-0500 c20011| 2016-04-06T02:52:08.351-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:628 locks:{ Global: { acquireCount: { r: 6 } }, Database: { 
acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 729ms [js_test:multi_coll_drop] 2016-04-06T02:52:24.403-0500 c20011| 2016-04-06T02:52:08.352-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|1, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|16, t: 1 }, name-id: "76" } [js_test:multi_coll_drop] 2016-04-06T02:52:24.413-0500 c20011| 2016-04-06T02:52:08.353-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.421-0500 s20014| 2016-04-06T02:52:08.421-0500 D ASIO [conn1] startCommand: RemoteCommand 108 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.421-0500 cmd:{ insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.421-0500-5704c02806c33406d4d9c0bf", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128421), what: "shardCollection.start", ns: "multidrop.coll", details: { shardKey: { _id: 1.0 }, collection: "multidrop.coll", primary: "shard0000:mongovm16:20010", initShards: [], numChunks: 1 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.421-0500 s20014| 2016-04-06T02:52:08.421-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 108 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.426-0500 s20014| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 108 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929128000|7, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:24.429-0500 s20014| 2016-04-06T02:52:08.429-0500 D ASIO [conn1] startCommand: RemoteCommand 110 -- target:mongovm16:20010 db:multidrop expDate:2016-04-06T02:52:38.429-0500 cmd:{ count: "coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:24.436-0500 s20014| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 110 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:24.439-0500 s20014| 2016-04-06T02:52:08.430-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 110 finished with response: { waitedMS: 0, n: 0, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.440-0500 s20014| 2016-04-06T02:52:08.430-0500 I SHARDING [conn1] going to create 1 chunk(s) for: multidrop.coll using new epoch 5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:24.447-0500 s20014| 2016-04-06T02:52:08.430-0500 D ASIO [conn1] startCommand: RemoteCommand 112 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.430-0500 cmd:{ insert: "chunks", documents: [ { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.452-0500 
s20014| 2016-04-06T02:52:08.430-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 112 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.462-0500 s20014| 2016-04-06T02:52:08.446-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 112 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929128000|8, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:24.466-0500 s20014| 2016-04-06T02:52:08.447-0500 D SHARDING [conn1] major version query from 0|0||5704c02806c33406d4d9c0c0 and over 0 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.469-0500 s20014| 2016-04-06T02:52:08.447-0500 D ASIO [conn1] startCommand: RemoteCommand 114 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.447-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.471-0500 s20014| 2016-04-06T02:52:08.447-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 114 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:24.477-0500 s20014| 2016-04-06T02:52:08.447-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 114 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.478-0500 s20014| 2016-04-06T02:52:08.448-0500 D SHARDING [conn1] loaded 1 chunks into new chunk manager for multidrop.coll with version 1|0||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:24.481-0500 s20014| 2016-04-06T02:52:08.448-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 1ms sequenceNumber: 2 version: 1|0||5704c02806c33406d4d9c0c0 based on: (empty) [js_test:multi_coll_drop] 2016-04-06T02:52:24.490-0500 s20014| 2016-04-06T02:52:08.448-0500 D ASIO [conn1] startCommand: RemoteCommand 116 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.448-0500 cmd:{ update: "collections", updates: [ { q: { _id: "multidrop.coll" }, u: { _id: "multidrop.coll", lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1.0 }, unique: false }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.495-0500 s20014| 2016-04-06T02:52:08.448-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 116 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.498-0500 s20014| 2016-04-06T02:52:08.484-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 116 finished with response: { ok: 1, nModified: 0, n: 1, upserted: [ { index: 0, _id: "multidrop.coll" } ], opTime: { ts: Timestamp 1459929128000|10, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:24.502-0500 s20014| 2016-04-06T02:52:08.484-0500 D ASIO [conn1] startCommand: RemoteCommand 118 -- target:mongovm16:20010 db:admin 
expDate:2016-04-06T02:52:38.484-0500 cmd:{ setShardVersion: "multidrop.coll", init: false, authoritative: true, configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shard: "shard0000", shardHost: "mongovm16:20010", version: Timestamp 1000|0, versionEpoch: ObjectId('5704c02806c33406d4d9c0c0'), noConnectionVersioning: true } [js_test:multi_coll_drop] 2016-04-06T02:52:24.505-0500 s20014| 2016-04-06T02:52:08.484-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 118 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:52:24.507-0500 s20014| 2016-04-06T02:52:08.488-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 118 finished with response: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.516-0500 s20014| 2016-04-06T02:52:08.489-0500 I SHARDING [conn1] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:08.489-0500-5704c02806c33406d4d9c0c1", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128489), what: "shardCollection.end", ns: "multidrop.coll", details: { version: "1|0||5704c02806c33406d4d9c0c0" } } [js_test:multi_coll_drop] 2016-04-06T02:52:24.519-0500 s20014| 2016-04-06T02:52:08.489-0500 D ASIO [conn1] startCommand: RemoteCommand 120 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.489-0500 cmd:{ insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.489-0500-5704c02806c33406d4d9c0c1", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128489), what: "shardCollection.end", ns: "multidrop.coll", details: { version: "1|0||5704c02806c33406d4d9c0c0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.522-0500 s20014| 2016-04-06T02:52:08.490-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 120 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.750-0500 s20014| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 120 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929128000|11, t: 1 }, electionId: ObjectId('7fffffff0000000000000001') } [js_test:multi_coll_drop] 2016-04-06T02:52:24.753-0500 s20014| 2016-04-06T02:52:08.496-0500 D ASIO [conn1] startCommand: RemoteCommand 122 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.496-0500 cmd:{ findAndModify: "locks", query: { ts: ObjectId('5704c02806c33406d4d9c0be') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.756-0500 s20014| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 122 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.760-0500 s20014| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 122 finished with response: { lastErrorObject: { updatedExisting: true, n: 1 }, value: { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c02806c33406d4d9c0be'), who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128413), why: "shardCollection" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.761-0500 s20014| 2016-04-06T02:52:08.502-0500 I SHARDING [conn1] distributed lock with ts: 5704c02806c33406d4d9c0be unlocked.
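Taken together, the mongos (s20014) entries above trace the full shardCollection flow against the config replica set: acquire the "multidrop.coll" distributed lock with a findAndModify upsert on config.locks (state 0 to 2, written with w: "majority"), log shardCollection.start to config.changelog, insert the initial MinKey-to-MaxKey chunk into config.chunks, upsert the collection entry in config.collections, push the new version to the shard with setShardVersion, log shardCollection.end, and finally release the lock by setting state back to 0. A minimal sketch of the client-side view of this flow in mongo shell JS (mongovm16:20014 is the mongos from this log; rerunning these commands against a live cluster where they already succeeded would of course be a no-op or an error):

    // Drive the same metadata flow through mongos and inspect the lock docs.
    var mongos = new Mongo("mongovm16:20014");
    var admin = mongos.getDB("admin");
    assert.commandWorked(admin.runCommand({ enableSharding: "multidrop" }));
    assert.commandWorked(admin.runCommand({ shardCollection: "multidrop.coll", key: { _id: 1 } }));
    // state: 0 means unlocked, 2 means held, matching the findAndModify
    // transitions in the log above.
    mongos.getDB("config").locks
          .find({ _id: { $in: ["multidrop", "multidrop.coll"] } })
          .forEach(printjson);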
[js_test:multi_coll_drop] 2016-04-06T02:52:24.767-0500 s20014| 2016-04-06T02:52:08.502-0500 D ASIO [conn1] startCommand: RemoteCommand 124 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.502-0500 cmd:{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.768-0500 s20014| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 124 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:24.770-0500 s20014| 2016-04-06T02:52:08.503-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 124 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop", primary: "shard0000", partitioned: true } ], id: 0, ns: "config.databases" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.775-0500 s20014| 2016-04-06T02:52:08.503-0500 D ASIO [conn1] startCommand: RemoteCommand 126 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.503-0500 cmd:{ find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.776-0500 s20014| 2016-04-06T02:52:08.503-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 126 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:24.779-0500 s20014| 2016-04-06T02:52:08.504-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 126 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1.0 }, unique: false } ], id: 0, ns: "config.collections" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.783-0500 s20014| 2016-04-06T02:52:08.504-0500 D SHARDING [conn1] major version query from 0|0||5704c02806c33406d4d9c0c0 and over 0 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.785-0500 s20014| 2016-04-06T02:52:08.504-0500 D ASIO [conn1] startCommand: RemoteCommand 128 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.504-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.787-0500 s20014| 2016-04-06T02:52:08.504-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 128 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:24.789-0500 s20014| 2016-04-06T02:52:08.505-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 128 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.796-0500 s20014| 2016-04-06T02:52:08.505-0500 D SHARDING [conn1] loaded 1 chunks into new chunk manager for multidrop.coll with version 1|0||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:24.797-0500 s20014| 
2016-04-06T02:52:08.505-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 3 version: 1|0||5704c02806c33406d4d9c0c0 based on: (empty) [js_test:multi_coll_drop] 2016-04-06T02:52:24.800-0500 s20014| 2016-04-06T02:52:08.505-0500 D SHARDING [conn1] found 1 collections left and 0 collections dropped for database multidrop [js_test:multi_coll_drop] 2016-04-06T02:52:24.802-0500 s20014| 2016-04-06T02:52:08.505-0500 D ASIO [conn1] startCommand: RemoteCommand 130 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.505-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.806-0500 s20014| 2016-04-06T02:52:08.506-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 130 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.809-0500 s20014| 2016-04-06T02:52:08.506-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 130 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.812-0500 s20014| 2016-04-06T02:52:08.507-0500 I COMMAND [conn1] splitting chunk [{ _id: MinKey },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:24.815-0500 s20014| 2016-04-06T02:52:08.539-0500 D ASIO [conn1] startCommand: RemoteCommand 132 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.539-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.817-0500 s20014| 2016-04-06T02:52:08.539-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 132 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.819-0500 s20014| 2016-04-06T02:52:08.540-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 132 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.823-0500 s20014| 2016-04-06T02:52:08.540-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|0||5704c02806c33406d4d9c0c0 and 1 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:24.826-0500 s20014| 2016-04-06T02:52:08.540-0500 D SHARDING [conn1] major version query from 1|0||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.828-0500 s20014| 2016-04-06T02:52:08.540-0500 D ASIO [conn1] startCommand: RemoteCommand 134 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.540-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.830-0500 s20014| 2016-04-06T02:52:08.540-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 134 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:24.835-0500 s20014| 2016-04-06T02:52:08.540-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 134 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: -100.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.837-0500 s20014| 2016-04-06T02:52:08.541-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|2||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:24.839-0500 s20014| 2016-04-06T02:52:08.541-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 4 version: 1|2||5704c02806c33406d4d9c0c0 based on: 1|0||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:24.840-0500 s20014| 2016-04-06T02:52:08.541-0500 D ASIO [conn1] startCommand: RemoteCommand 136 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.541-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.841-0500 s20014| 2016-04-06T02:52:08.541-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 136 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.844-0500 s20014| 2016-04-06T02:52:08.541-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 136 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.847-0500 s20014| 2016-04-06T02:52:08.541-0500 I COMMAND [conn1] splitting chunk [{ _id: -100.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:24.849-0500 s20014| 2016-04-06T02:52:08.573-0500 D ASIO [conn1] startCommand: RemoteCommand 138 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.573-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.851-0500 s20014| 2016-04-06T02:52:08.573-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 138 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:24.854-0500 s20014| 2016-04-06T02:52:08.573-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 138 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: 
"multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.858-0500 s20014| 2016-04-06T02:52:08.574-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|2||5704c02806c33406d4d9c0c0 and 2 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:24.859-0500 s20014| 2016-04-06T02:52:08.574-0500 D SHARDING [conn1] major version query from 1|2||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|2 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.867-0500 s20014| 2016-04-06T02:52:08.574-0500 D ASIO [conn1] startCommand: RemoteCommand 140 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.574-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|2 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.868-0500 s20014| 2016-04-06T02:52:08.574-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 140 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:24.876-0500 s20014| 2016-04-06T02:52:08.577-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 140 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: -99.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.876-0500 s20014| 2016-04-06T02:52:08.578-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|4||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:24.879-0500 s20014| 2016-04-06T02:52:08.578-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 3ms sequenceNumber: 5 version: 1|4||5704c02806c33406d4d9c0c0 based on: 1|2||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:24.882-0500 s20014| 2016-04-06T02:52:08.578-0500 D ASIO [conn1] startCommand: RemoteCommand 142 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.578-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.884-0500 s20014| 2016-04-06T02:52:08.578-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 142 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:24.889-0500 s20014| 2016-04-06T02:52:08.579-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 142 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, 
ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.890-0500 s20014| 2016-04-06T02:52:08.579-0500 I COMMAND [conn1] splitting chunk [{ _id: -99.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:24.893-0500 s20014| 2016-04-06T02:52:08.610-0500 D ASIO [conn1] startCommand: RemoteCommand 144 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.610-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.896-0500 s20014| 2016-04-06T02:52:08.611-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 144 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:24.897-0500 c20011| 2016-04-06T02:52:08.353-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:24.903-0500 c20011| 2016-04-06T02:52:08.353-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|1, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.911-0500 c20011| 2016-04-06T02:52:08.353-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|1, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929127000|16, t: 1 }, name-id: "76" } [js_test:multi_coll_drop] 2016-04-06T02:52:24.917-0500 c20011| 2016-04-06T02:52:08.353-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.926-0500 c20011| 2016-04-06T02:52:08.353-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:24.934-0500 c20011| 2016-04-06T02:52:08.353-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:24.937-0500 c20011| 2016-04-06T02:52:08.355-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.944-0500 c20011| 2016-04-06T02:52:08.355-0500 D 
COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:24.946-0500 c20011| 2016-04-06T02:52:08.355-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|1, t: 1 } and is durable through: { ts: Timestamp 1459929128000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.948-0500 c20011| 2016-04-06T02:52:08.355-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.951-0500 c20011| 2016-04-06T02:52:08.355-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.957-0500 c20011| 2016-04-06T02:52:08.355-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:24.964-0500 c20011| 2016-04-06T02:52:08.356-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:24.966-0500 c20011| 2016-04-06T02:52:08.356-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:24.975-0500 c20011| 2016-04-06T02:52:08.356-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:24.993-0500 c20011| 2016-04-06T02:52:08.356-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|1, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.016-0500 c20011| 2016-04-06T02:52:08.356-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} 
protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.021-0500 c20011| 2016-04-06T02:52:08.356-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.026-0500 c20011| 2016-04-06T02:52:08.356-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.032-0500 c20011| 2016-04-06T02:52:08.356-0500 I COMMAND [conn10] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop", state: 0 }, update: { $set: { ts: ObjectId('5704c02806c33406d4d9c0bd'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128349), why: "enableSharding" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02806c33406d4d9c0bd'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128349), why: "enableSharding" } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 reslen:579 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.032-0500 *** Stepping down connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:25.042-0500 c20011| 2016-04-06T02:52:08.356-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.046-0500 c20011| 2016-04-06T02:52:08.357-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.050-0500 c20011| 2016-04-06T02:52:08.357-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.050-0500 c20011| 2016-04-06T02:52:08.357-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.056-0500 c20011| 
2016-04-06T02:52:08.357-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.060-0500 c20011| 2016-04-06T02:52:08.357-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|1, t: 1 } and is durable through: { ts: Timestamp 1459929128000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.078-0500 c20011| 2016-04-06T02:52:08.357-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.081-0500 c20011| 2016-04-06T02:52:08.358-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.082-0500 c20011| 2016-04-06T02:52:08.362-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59101 #24 (20 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:25.084-0500 c20011| 2016-04-06T02:52:08.362-0500 D COMMAND [conn24] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.085-0500 c20011| 2016-04-06T02:52:08.363-0500 I COMMAND [conn24] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.085-0500 c20011| 2016-04-06T02:52:08.363-0500 D COMMAND [conn24] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.087-0500 c20011| 2016-04-06T02:52:08.363-0500 I COMMAND [conn24] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.088-0500 c20011| 2016-04-06T02:52:08.363-0500 D COMMAND [conn24] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.089-0500 c20011| 2016-04-06T02:52:08.363-0500 I COMMAND [conn24] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.092-0500 c20011| 2016-04-06T02:52:08.364-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59103 #25 (21 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:25.093-0500 c20011| 2016-04-06T02:52:08.364-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59104 #26 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:25.095-0500 c20011| 2016-04-06T02:52:08.364-0500 D COMMAND [conn25] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.098-0500 c20011| 2016-04-06T02:52:08.364-0500 I COMMAND [conn25] command 
admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.099-0500 c20011| 2016-04-06T02:52:08.364-0500 D COMMAND [conn25] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.102-0500 c20011| 2016-04-06T02:52:08.364-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.104-0500 c20011| 2016-04-06T02:52:08.364-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.105-0500 c20011| 2016-04-06T02:52:08.364-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:25.113-0500 c20011| 2016-04-06T02:52:08.364-0500 I COMMAND [conn25] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:423 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.116-0500 c20011| 2016-04-06T02:52:08.364-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20010:1459929128:185613966" }, update: { $set: { ping: new Date(1459929128362) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.116-0500 c20011| 2016-04-06T02:52:08.364-0500 D QUERY [conn25] Using idhack: { _id: "mongovm16:20010:1459929128:185613966" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.123-0500 c20011| 2016-04-06T02:52:08.365-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:504 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.126-0500 c20011| 2016-04-06T02:52:08.365-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:504 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.130-0500 c20011| 2016-04-06T02:52:08.365-0500 D COMMAND [conn10] run command config.$cmd { update: "databases", updates: [ { q: { _id: "multidrop" }, u: { _id: "multidrop", primary: "shard0000", partitioned: true }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 
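
The config.$cmd update on "databases" just above is the tail end of enableSharding: it upserts { _id: "multidrop", primary: "shard0000", partitioned: true }. The STORAGE lines that follow show the side effect: config.databases does not exist yet, so the first write lazily creates the WiredTiger record-store table and its _id_ index. A standalone reproduction of the same pattern (illustrative; the first write to a missing collection implicitly creates it):

db.getSiblingDB("config").databases.update(
    { _id: "multidrop" },
    { _id: "multidrop", primary: "shard0000", partitioned: true },
    { upsert: true, writeConcern: { w: "majority", wtimeout: 15000 } }
);
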
2016-04-06T02:52:25.131-0500 c20011| 2016-04-06T02:52:08.365-0500 D COMMAND [conn26] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.133-0500 c20011| 2016-04-06T02:52:08.365-0500 D STORAGE [conn10] create collection config.databases {} [js_test:multi_coll_drop] 2016-04-06T02:52:25.136-0500 c20011| 2016-04-06T02:52:08.365-0500 D STORAGE [conn10] stored meta data for config.databases @ RecordId(15) [js_test:multi_coll_drop] 2016-04-06T02:52:25.137-0500 c20011| 2016-04-06T02:52:08.365-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-35--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:25.144-0500 c20011| 2016-04-06T02:52:08.365-0500 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.147-0500 c20011| 2016-04-06T02:52:08.368-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.152-0500 c20011| 2016-04-06T02:52:08.368-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.153-0500 c20011| 2016-04-06T02:52:08.368-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.159-0500 c20011| 2016-04-06T02:52:08.368-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|2, t: 1 } and is durable through: { ts: Timestamp 1459929128000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.162-0500 c20011| 2016-04-06T02:52:08.368-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.167-0500 c20011| 2016-04-06T02:52:08.368-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.169-0500 c20011| 2016-04-06T02:52:08.368-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.169-0500 c20011| 2016-04-06T02:52:08.368-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.171-0500 c20011| 2016-04-06T02:52:08.368-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|1, t: 1 }, name-id: "77" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.176-0500 c20011| 2016-04-06T02:52:08.368-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.179-0500 c20011| 2016-04-06T02:52:08.368-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|2, t: 1 } and is durable through: { ts: Timestamp 1459929128000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.183-0500 c20011| 2016-04-06T02:52:08.368-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|1, t: 1 }, name-id: "77" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.190-0500 c20011| 2016-04-06T02:52:08.368-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.195-0500 c20011| 2016-04-06T02:52:08.369-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.201-0500 c20011| 2016-04-06T02:52:08.370-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.202-0500 c20011| 2016-04-06T02:52:08.370-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.206-0500 c20011| 
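
The "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines are the flip side of the commit-point bookkeeping: an operation that needs a given optime to be majority-committed (a readConcern "majority" read, or a w: "majority" write waiting on its own optime) parks until the storage engine's committed snapshot advances past that optime, which happens as the next replSetUpdatePosition rounds move the commit point forward. The kind of read that triggers the wait, with shapes copied from the conn25 entries earlier (illustrative):

db.getSiblingDB("config").runCommand({
    find: "shards",
    readConcern: {
        level: "majority",
        // do not read before this optime; Timestamp(0, 0) means "any committed state"
        afterOpTime: { ts: Timestamp(0, 0), t: NumberLong(-1) }
    },
    maxTimeMS: 30000
});
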
2016-04-06T02:52:08.370-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.221-0500 c20011| 2016-04-06T02:52:08.370-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|2, t: 1 } and is durable through: { ts: Timestamp 1459929128000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.221-0500 c20011| 2016-04-06T02:52:08.370-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.228-0500 c20011| 2016-04-06T02:52:08.370-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.234-0500 c20011| 2016-04-06T02:52:08.371-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.234-0500 c20011| 2016-04-06T02:52:08.371-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.236-0500 c20011| 2016-04-06T02:52:08.371-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|2, t: 1 } and is durable through: { ts: Timestamp 1459929128000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.240-0500 c20011| 2016-04-06T02:52:08.371-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.248-0500 c20011| 2016-04-06T02:52:08.371-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.252-0500 c20011| 2016-04-06T02:52:08.371-0500 I COMMAND 
[conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.257-0500 c20011| 2016-04-06T02:52:08.371-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.263-0500 c20011| 2016-04-06T02:52:08.371-0500 I COMMAND [conn25] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20010:1459929128:185613966" }, update: { $set: { ping: new Date(1459929128362) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1459929128362) } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 reslen:413 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.267-0500 c20011| 2016-04-06T02:52:08.371-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.271-0500 c20011| 2016-04-06T02:52:08.371-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-35--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:25.275-0500 c20011| 2016-04-06T02:52:08.372-0500 D STORAGE [conn10] config.databases: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:25.283-0500 c20011| 2016-04-06T02:52:08.372-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.289-0500 c20011| 2016-04-06T02:52:08.372-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-36--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.databases" }), [js_test:multi_coll_drop] 2016-04-06T02:52:25.293-0500 c20011| 2016-04-06T02:52:08.372-0500 D STORAGE [conn10] create uri: table:index-36--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.databases" }), [js_test:multi_coll_drop] 2016-04-06T02:52:25.294-0500 c20011| 
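
The steady conn13/conn14 getMore traffic on local.oplog.rs is the two secondaries tailing the primary's oplog: each holds an awaitData cursor polled with maxTimeMS: 2500, and the internal term / lastKnownCommittedOpTime fields both fence off stale cursors and piggyback the commit point back and forth. A hand-rolled tail of the same collection, minus those internal fields (a sketch using the legacy shell cursor flags):

var oplog = db.getSiblingDB("local").oplog.rs;
var cur = oplog.find({ ts: { $gte: Timestamp(1459929128, 1) } })
              .addOption(DBQuery.Option.tailable)
              .addOption(DBQuery.Option.awaitData);
while (cur.hasNext()) {
    printjson(cur.next());   // prints each oplog entry as it arrives
}
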
2016-04-06T02:52:08.374-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-36--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:25.296-0500 c20011| 2016-04-06T02:52:08.374-0500 D STORAGE [conn10] config.databases: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:25.297-0500 c20011| 2016-04-06T02:52:08.374-0500 D QUERY [conn10] Using idhack: { _id: "multidrop" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.306-0500 c20011| 2016-04-06T02:52:08.374-0500 I WRITE [conn10] update config.databases query: { _id: "multidrop" } update: { _id: "multidrop", primary: "shard0000", partitioned: true } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.311-0500 c20011| 2016-04-06T02:52:08.374-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.317-0500 c20011| 2016-04-06T02:52:08.374-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.321-0500 c20011| 2016-04-06T02:52:08.375-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|2, t: 1 }, name-id: "78" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.325-0500 c20011| 2016-04-06T02:52:08.377-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.327-0500 c20011| 2016-04-06T02:52:08.377-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:500 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.330-0500 c20011| 2016-04-06T02:52:08.377-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.336-0500 c20011| 2016-04-06T02:52:08.378-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:500 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.338-0500 c20011| 2016-04-06T02:52:08.379-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.343-0500 c20011| 2016-04-06T02:52:08.380-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.348-0500 c20011| 2016-04-06T02:52:08.392-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.349-0500 c20011| 2016-04-06T02:52:08.392-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.353-0500 c20011| 2016-04-06T02:52:08.392-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.358-0500 c20011| 2016-04-06T02:52:08.392-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|3, t: 1 } and is durable through: { ts: Timestamp 1459929128000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.363-0500 c20011| 2016-04-06T02:52:08.392-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|2, t: 1 }, name-id: "78" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.371-0500 c20011| 2016-04-06T02:52:08.393-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.376-0500 c20011| 2016-04-06T02:52:08.393-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.377-0500 c20011| 2016-04-06T02:52:08.393-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.379-0500 c20011| 2016-04-06T02:52:08.393-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.382-0500 c20011| 2016-04-06T02:52:08.393-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|3, t: 1 } and is durable through: { ts: Timestamp 1459929128000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.382-0500 c20011| 2016-04-06T02:52:08.393-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.387-0500 c20011| 2016-04-06T02:52:08.393-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|3, t: 1 }, name-id: "81" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.390-0500 c20011| 2016-04-06T02:52:08.393-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|3, t: 1 }, name-id: "81" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.397-0500 c20011| 2016-04-06T02:52:08.393-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.401-0500 c20011| 2016-04-06T02:52:08.393-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 14ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.404-0500 c20011| 2016-04-06T02:52:08.394-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.407-0500 c20011| 2016-04-06T02:52:08.394-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: 
[ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.408-0500 c20011| 2016-04-06T02:52:08.394-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.410-0500 c20011| 2016-04-06T02:52:08.394-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.414-0500 c20011| 2016-04-06T02:52:08.394-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|4, t: 1 } and is durable through: { ts: Timestamp 1459929128000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.417-0500 c20011| 2016-04-06T02:52:08.394-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|3, t: 1 }, name-id: "81" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.425-0500 c20011| 2016-04-06T02:52:08.394-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.429-0500 c20011| 2016-04-06T02:52:08.394-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.430-0500 c20011| 2016-04-06T02:52:08.394-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.435-0500 c20011| 2016-04-06T02:52:08.395-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.436-0500 c20011| 2016-04-06T02:52:08.395-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.437-0500 c20011| 2016-04-06T02:52:08.395-0500 D REPL 
[conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.440-0500 c20011| 2016-04-06T02:52:08.395-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|4, t: 1 } and is durable through: { ts: Timestamp 1459929128000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.442-0500 c20011| 2016-04-06T02:52:08.395-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.450-0500 c20011| 2016-04-06T02:52:08.395-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.452-0500 c20011| 2016-04-06T02:52:08.395-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|3, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.456-0500 c20011| 2016-04-06T02:52:08.395-0500 I COMMAND [conn10] command config.$cmd command: update { update: "databases", updates: [ { q: { _id: "multidrop" }, u: { _id: "multidrop", primary: "shard0000", partitioned: true }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:439 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 29ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.464-0500 c20011| 2016-04-06T02:52:08.395-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|3, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.467-0500 c20011| 2016-04-06T02:52:08.395-0500 D COMMAND [conn10] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02806c33406d4d9c0bd') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.470-0500 c20011| 2016-04-06T02:52:08.395-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 
1459929128000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.473-0500 c20011| 2016-04-06T02:52:08.395-0500 D QUERY [conn10] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.475-0500 c20011| 2016-04-06T02:52:08.395-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02806c33406d4d9c0bd') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.477-0500 c20011| 2016-04-06T02:52:08.395-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.483-0500 c20011| 2016-04-06T02:52:08.396-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:490 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.491-0500 c20011| 2016-04-06T02:52:08.396-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:490 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.498-0500 c20011| 2016-04-06T02:52:08.397-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.498-0500 c20011| 2016-04-06T02:52:08.397-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.504-0500 c20011| 2016-04-06T02:52:08.397-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|3, t: 1 } and is durable through: { ts: Timestamp 1459929128000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.508-0500 c20011| 2016-04-06T02:52:08.397-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.514-0500 c20011| 2016-04-06T02:52:08.397-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: 
{ ts: Timestamp 1459929128000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.519-0500 c20011| 2016-04-06T02:52:08.398-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|4, t: 1 }, name-id: "82" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.526-0500 c20011| 2016-04-06T02:52:08.398-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.529-0500 c20011| 2016-04-06T02:52:08.398-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.545-0500 c20011| 2016-04-06T02:52:08.398-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.548-0500 c20011| 2016-04-06T02:52:08.398-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.551-0500 c20011| 2016-04-06T02:52:08.398-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|5, t: 1 } and is durable through: { ts: Timestamp 1459929128000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.552-0500 c20011| 2016-04-06T02:52:08.398-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|4, t: 1 }, name-id: "82" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.556-0500 c20011| 2016-04-06T02:52:08.398-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.560-0500 c20011| 2016-04-06T02:52:08.399-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.566-0500 c20011| 2016-04-06T02:52:08.399-0500 D COMMAND [conn12] 
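
conn10's second findAndModify (query on the lock's ts, $set: { state: 0 }) releases the distributed lock, again with w: "majority"; the "Required snapshot optime: ...|5" wait above is that write blocking until the commit point covers its own optime. The equivalent release written directly (a sketch; the ObjectId is the lock ts from this log):

db.getSiblingDB("config").runCommand({
    findAndModify: "locks",
    query: { ts: ObjectId("5704c02806c33406d4d9c0bd") },
    update: { $set: { state: 0 } },          // 0 = unlocked
    writeConcern: { w: "majority", wtimeout: 15000 },
    maxTimeMS: 30000
});
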
run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.567-0500 c20011| 2016-04-06T02:52:08.399-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.568-0500 c20011| 2016-04-06T02:52:08.399-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|3, t: 1 } and is durable through: { ts: Timestamp 1459929128000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.571-0500 c20011| 2016-04-06T02:52:08.399-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|4, t: 1 }, name-id: "82" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.577-0500 c20011| 2016-04-06T02:52:08.399-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.582-0500 c20011| 2016-04-06T02:52:08.399-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.587-0500 c20011| 2016-04-06T02:52:08.399-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.589-0500 c20011| 2016-04-06T02:52:08.399-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.590-0500 c20011| 2016-04-06T02:52:08.399-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.592-0500 c20011| 2016-04-06T02:52:08.399-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|5, t: 1 } and is durable through: { ts: Timestamp 
1459929128000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.592-0500 c20011| 2016-04-06T02:52:08.399-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.595-0500 c20011| 2016-04-06T02:52:08.399-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.599-0500 c20011| 2016-04-06T02:52:08.399-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.601-0500 c20011| 2016-04-06T02:52:08.400-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.618-0500 c20011| 2016-04-06T02:52:08.400-0500 I COMMAND [conn10] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02806c33406d4d9c0bd') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:555 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.621-0500 c20011| 2016-04-06T02:52:08.400-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.623-0500 c20011| 2016-04-06T02:52:08.400-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.629-0500 c20011| 2016-04-06T02:52:08.402-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 1, cfgver: 1 
}, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.634-0500 c20011| 2016-04-06T02:52:08.402-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.637-0500 c20011| 2016-04-06T02:52:08.402-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|4, t: 1 } and is durable through: { ts: Timestamp 1459929128000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.642-0500 c20011| 2016-04-06T02:52:08.402-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.646-0500 c20011| 2016-04-06T02:52:08.402-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.657-0500 c20011| 2016-04-06T02:52:08.402-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.657-0500 c20011| 2016-04-06T02:52:08.403-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.660-0500 c20011| 2016-04-06T02:52:08.403-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|4, t: 1 } and is durable through: { ts: Timestamp 1459929128000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.664-0500 c20011| 2016-04-06T02:52:08.403-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.670-0500 c20011| 2016-04-06T02:52:08.403-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, 
cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.672-0500 c20011| 2016-04-06T02:52:08.403-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.672-0500 c20011| 2016-04-06T02:52:08.403-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.676-0500 c20011| 2016-04-06T02:52:08.403-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|5, t: 1 } and is durable through: { ts: Timestamp 1459929128000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.678-0500 c20011| 2016-04-06T02:52:08.403-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.685-0500 c20011| 2016-04-06T02:52:08.403-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.689-0500 c20011| 2016-04-06T02:52:08.404-0500 D COMMAND [conn10] run command config.$cmd { find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.690-0500 c20011| 2016-04-06T02:52:08.404-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.693-0500 c20011| 2016-04-06T02:52:08.404-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.694-0500 c20011| 2016-04-06T02:52:08.404-0500 D QUERY [conn10] Collection config.collections does not exist. 
[js_test:multi_coll_drop] 2016-04-06T02:52:25.697-0500 c20011| 2016-04-06T02:52:08.404-0500 I COMMAND [conn10] command config.collections command: find { find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:395 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.702-0500 c20011| 2016-04-06T02:52:08.406-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.703-0500 c20011| 2016-04-06T02:52:08.406-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.708-0500 c20011| 2016-04-06T02:52:08.406-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|5, t: 1 } and is durable through: { ts: Timestamp 1459929128000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.709-0500 c20011| 2016-04-06T02:52:08.406-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.715-0500 c20011| 2016-04-06T02:52:08.407-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.721-0500 c20011| 2016-04-06T02:52:08.413-0500 D COMMAND [conn10] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02806c33406d4d9c0be'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128413), why: "shardCollection" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.723-0500 c20011| 2016-04-06T02:52:08.413-0500 D QUERY [conn10] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.726-0500 c20011| 2016-04-06T02:52:08.413-0500 D QUERY [conn10] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
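The findAndModify on config.locks above is the distributed lock for "multidrop.coll" being taken ahead of the shardCollection operation: the query matches the lock document only while it is unlocked (state: 0), the $set flips it to locked (state: 2) and records who/process/when/why, and upsert: true creates the document on first use. A sketch of the same acquisition with every field value copied from the log (illustrative only; in the real system the sharding catalog code issues this, not a shell user):

    var cfg = new Mongo("mongovm16:20011").getDB("config");
    var lock = cfg.runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },    // match only if unlocked
        update: { $set: {
            ts: ObjectId('5704c02806c33406d4d9c0be'),  // lock session id
            state: 2,                                  // 2 = exclusively locked
            who: "mongovm16:20014:1459929123:-665935931:conn1",
            process: "mongovm16:20014:1459929123:-665935931",
            when: new Date(1459929128413),
            why: "shardCollection"
        } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    // If another process already holds the lock, the query matches nothing and the
    // upsert collides on _id, which the caller interprets as "lock busy".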
[js_test:multi_coll_drop] 2016-04-06T02:52:25.733-0500 c20011| 2016-04-06T02:52:08.413-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.738-0500 c20011| 2016-04-06T02:52:08.414-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:634 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.743-0500 c20011| 2016-04-06T02:52:08.414-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:634 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.744-0500 c20011| 2016-04-06T02:52:08.415-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|5, t: 1 }, name-id: "83" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.748-0500 c20011| 2016-04-06T02:52:08.416-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.752-0500 c20011| 2016-04-06T02:52:08.416-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.758-0500 c20011| 2016-04-06T02:52:08.418-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.759-0500 c20011| 2016-04-06T02:52:08.418-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.763-0500 c20011| 2016-04-06T02:52:08.418-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|6, t: 1 } and is durable through: { ts: Timestamp 1459929128000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.764-0500 c20011| 2016-04-06T02:52:08.418-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp
1459929128000|5, t: 1 }, name-id: "83" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.769-0500 c20011| 2016-04-06T02:52:08.418-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.772-0500 c20011| 2016-04-06T02:52:08.418-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.778-0500 c20011| 2016-04-06T02:52:08.419-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.778-0500 c20011| 2016-04-06T02:52:08.419-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.780-0500 c20011| 2016-04-06T02:52:08.419-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.785-0500 c20011| 2016-04-06T02:52:08.419-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|6, t: 1 } and is durable through: { ts: Timestamp 1459929128000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.785-0500 c20011| 2016-04-06T02:52:08.419-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|5, t: 1 }, name-id: "83" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.793-0500 c20011| 2016-04-06T02:52:08.419-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.797-0500 c20011| 2016-04-06T02:52:08.419-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.798-0500 c20011| 2016-04-06T02:52:08.419-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.809-0500 c20011| 2016-04-06T02:52:08.419-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.811-0500 c20011| 2016-04-06T02:52:08.419-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.815-0500 c20011| 2016-04-06T02:52:08.419-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|6, t: 1 } and is durable through: { ts: Timestamp 1459929128000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.816-0500 c20011| 2016-04-06T02:52:08.419-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.819-0500 c20011| 2016-04-06T02:52:08.419-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.824-0500 c20011| 2016-04-06T02:52:08.419-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.829-0500 c20011| 2016-04-06T02:52:08.419-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|6, t: 1 } and is durable through: { ts: Timestamp 1459929128000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.833-0500 c20011| 2016-04-06T02:52:08.419-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.836-0500 c20011| 2016-04-06T02:52:08.419-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.840-0500 c20011| 2016-04-06T02:52:08.419-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.847-0500 c20011| 2016-04-06T02:52:08.419-0500 I COMMAND [conn10] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02806c33406d4d9c0be'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128413), why: "shardCollection" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02806c33406d4d9c0be'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128413), why: "shardCollection" } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 reslen:590 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.857-0500 c20011| 2016-04-06T02:52:08.419-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.861-0500 c20011| 2016-04-06T02:52:08.420-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.863-0500 c20011| 2016-04-06T02:52:08.420-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.868-0500 c20011| 2016-04-06T02:52:08.420-0500 D COMMAND [conn10] run command config.$cmd { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.870-0500 c20011| 2016-04-06T02:52:08.420-0500 D COMMAND 
[conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.874-0500 c20011| 2016-04-06T02:52:08.420-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.874-0500 c20011| 2016-04-06T02:52:08.420-0500 D QUERY [conn10] Using idhack: query: { _id: "multidrop" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:25.879-0500 c20011| 2016-04-06T02:52:08.420-0500 I COMMAND [conn10] command config.databases command: find { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:457 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.882-0500 s20014| 2016-04-06T02:52:08.613-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 144 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.885-0500 s20014| 2016-04-06T02:52:08.613-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|4||5704c02806c33406d4d9c0c0 and 3 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:25.890-0500 s20014| 2016-04-06T02:52:08.613-0500 D SHARDING [conn1] major version query from 1|4||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|4 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.895-0500 s20014| 2016-04-06T02:52:08.613-0500 D ASIO [conn1] startCommand: RemoteCommand 146 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.613-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|4 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.896-0500 s20014| 2016-04-06T02:52:08.613-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 146 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:25.901-0500 s20014| 2016-04-06T02:52:08.614-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 146 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: -98.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
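Requests 144 and 146 above are the mongos performing an incremental chunk-map refresh: rather than re-reading every chunk, it keeps the chunk manager it already has (version 1|4||...) and asks the config servers only for chunks whose lastmod is at or above that version, sorted ascending so newer split results overlay the stale entries. A sketch of the same diff query, with namespace and version values copied from the log; the choice of config host is an assumption (the mongos targeted mongovm16:20013 here, and the afterOpTime read concern shown above is what makes reading from any member safe):

    // Hypothetical replay of the refresh query (illustrative only).
    var cfg = new Mongo("mongovm16:20011").getDB("config");
    cfg.chunks.find({ ns: "multidrop.coll", lastmod: { $gte: Timestamp(1000, 4) } })
              .sort({ lastmod: 1 })
              .forEach(printjson);
    // Returns the 1|5 and 1|6 chunks seen in the response above; merging them over
    // the cached map yields the "version 1|6 ... based on: 1|4" line that follows.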
[js_test:multi_coll_drop] 2016-04-06T02:52:25.904-0500 s20014| 2016-04-06T02:52:08.614-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|6||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:25.907-0500 s20014| 2016-04-06T02:52:08.614-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 6 version: 1|6||5704c02806c33406d4d9c0c0 based on: 1|4||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:25.913-0500 s20014| 2016-04-06T02:52:08.614-0500 D ASIO [conn1] startCommand: RemoteCommand 148 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.614-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.918-0500 s20014| 2016-04-06T02:52:08.614-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 148 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:25.924-0500 s20014| 2016-04-06T02:52:08.614-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 148 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.931-0500 c20011| 2016-04-06T02:52:08.421-0500 D COMMAND [conn10] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.421-0500-5704c02806c33406d4d9c0bf", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128421), what: "shardCollection.start", ns: "multidrop.coll", details: { shardKey: { _id: 1.0 }, collection: "multidrop.coll", primary: "shard0000:mongovm16:20010", initShards: [], numChunks: 1 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.935-0500 c20011| 2016-04-06T02:52:08.422-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:784 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.940-0500 c20011| 2016-04-06T02:52:08.422-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:784 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.945-0500 c20011| 2016-04-06T02:52:08.422-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|6, t: 1 }, name-id: "84" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.949-0500 c20011| 2016-04-06T02:52:08.424-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts:
Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.949-0500 c20011| 2016-04-06T02:52:08.424-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.957-0500 c20011| 2016-04-06T02:52:08.424-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|7, t: 1 } and is durable through: { ts: Timestamp 1459929128000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.967-0500 c20011| 2016-04-06T02:52:08.424-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:25.970-0500 c20011| 2016-04-06T02:52:08.424-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|6, t: 1 }, name-id: "84" } [js_test:multi_coll_drop] 2016-04-06T02:52:25.970-0500 c20011| 2016-04-06T02:52:08.424-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:25.974-0500 c20011| 2016-04-06T02:52:08.424-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.987-0500 c20011| 2016-04-06T02:52:08.424-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:25.989-0500 c20011| 2016-04-06T02:52:08.424-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:25.991-0500 s20014| 2016-04-06T02:52:08.614-0500 I COMMAND [conn1] splitting chunk [{ _id: -98.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:25.997-0500 s20014| 2016-04-06T02:52:08.656-0500 D ASIO [conn1] startCommand: RemoteCommand 150 -- target:mongovm16:20012 db:config 
expDate:2016-04-06T02:52:38.656-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:25.999-0500 s20014| 2016-04-06T02:52:08.656-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 150 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.010-0500 c20013| 2016-04-06T02:52:08.351-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 280 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|1, t: 1, h: 2686008167822126967, v: 2, op: "i", ns: "config.locks", o: { _id: "multidrop", state: 2, ts: ObjectId('5704c02806c33406d4d9c0bd'), who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128349), why: "enableSharding" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.015-0500 s20014| 2016-04-06T02:52:08.657-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 150 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.019-0500 c20013| 2016-04-06T02:52:08.353-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|1 and ending at ts: Timestamp 1459929128000|1 [js_test:multi_coll_drop] 2016-04-06T02:52:26.025-0500 c20013| 2016-04-06T02:52:08.354-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:26.035-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.043-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.044-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.049-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.050-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.051-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.055-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.056-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.056-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.063-0500 c20011| 2016-04-06T02:52:08.424-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.067-0500 c20011| 2016-04-06T02:52:08.424-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|7, t: 1 } and is durable through: { ts: Timestamp 1459929128000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.069-0500 s20014| 2016-04-06T02:52:08.657-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|6||5704c02806c33406d4d9c0c0 and 4 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:26.074-0500 d20010| 2016-04-06T02:52:10.165-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:10.165-0500-5704c02a65c17830b843f19f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130165), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -84.0 }, max: { _id: MaxKey } }, left: { min: { _id: -84.0 }, max: { _id: -83.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -83.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.078-0500 s20014| 2016-04-06T02:52:08.657-0500 D SHARDING [conn1] major version query from 1|6||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|6 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.081-0500 s20014| 2016-04-06T02:52:08.657-0500 D ASIO [conn1] 
startCommand: RemoteCommand 152 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.657-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|6 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.082-0500 s20014| 2016-04-06T02:52:08.657-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 152 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:26.087-0500 s20014| 2016-04-06T02:52:08.657-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 152 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: -97.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.089-0500 s20014| 2016-04-06T02:52:08.657-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|8||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.093-0500 s20014| 2016-04-06T02:52:08.657-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 7 version: 1|8||5704c02806c33406d4d9c0c0 based on: 1|6||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.101-0500 s20014| 2016-04-06T02:52:08.658-0500 D ASIO [conn1] startCommand: RemoteCommand 154 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.658-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.102-0500 s20014| 2016-04-06T02:52:08.658-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 154 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.106-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:26.107-0500 c20011| 2016-04-06T02:52:08.424-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|6, t: 1 }, name-id: "84" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.111-0500 c20011| 2016-04-06T02:52:08.424-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.121-0500 c20011| 2016-04-06T02:52:08.424-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.126-0500 c20011| 2016-04-06T02:52:08.429-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:26.127-0500 c20011| 2016-04-06T02:52:08.429-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.134-0500 c20011| 2016-04-06T02:52:08.429-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:26.138-0500 c20011| 2016-04-06T02:52:08.429-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.141-0500 c20011| 2016-04-06T02:52:08.429-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.142-0500 c20011| 2016-04-06T02:52:08.429-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|7, t: 1 } and is durable through: { ts: Timestamp 1459929128000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.143-0500 c20011| 2016-04-06T02:52:08.429-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.147-0500 c20011| 2016-04-06T02:52:08.429-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.151-0500 c20011| 2016-04-06T02:52:08.429-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|7, t: 1 } and is durable through: { ts: Timestamp 1459929128000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.153-0500 c20011| 2016-04-06T02:52:08.429-0500 D REPL [conn12] received notification that node with 
memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.163-0500 c20011| 2016-04-06T02:52:08.429-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.171-0500 c20011| 2016-04-06T02:52:08.429-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.177-0500 c20011| 2016-04-06T02:52:08.429-0500 I COMMAND [conn10] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.421-0500-5704c02806c33406d4d9c0bf", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128421), what: "shardCollection.start", ns: "multidrop.coll", details: { shardKey: { _id: 1.0 }, collection: "multidrop.coll", primary: "shard0000:mongovm16:20010", initShards: [], numChunks: 1 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.181-0500 c20011| 2016-04-06T02:52:08.429-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.183-0500 c20011| 2016-04-06T02:52:08.430-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.187-0500 c20011| 2016-04-06T02:52:08.430-0500 D COMMAND [conn10] run command config.$cmd { insert: "chunks", documents: [ { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.191-0500 c20011| 2016-04-06T02:52:08.430-0500 D COMMAND [conn14] run command local.$cmd { 
getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.201-0500 c20011| 2016-04-06T02:52:08.431-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:593 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.207-0500 c20011| 2016-04-06T02:52:08.431-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:593 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.209-0500 c20011| 2016-04-06T02:52:08.433-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|7, t: 1 }, name-id: "85" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.212-0500 c20011| 2016-04-06T02:52:08.433-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.219-0500 c20011| 2016-04-06T02:52:08.434-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.227-0500 c20011| 2016-04-06T02:52:08.436-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:26.227-0500 c20011| 2016-04-06T02:52:08.436-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.228-0500 c20011| 2016-04-06T02:52:08.436-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.240-0500 c20011| 2016-04-06T02:52:08.436-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|8, t: 1 } and is durable through: { ts: Timestamp 1459929128000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.243-0500 c20011| 2016-04-06T02:52:08.436-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|8, t: 1 } is not yet part of the current 'committed' snapshot: { 
optime: { ts: Timestamp 1459929128000|7, t: 1 }, name-id: "85" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.255-0500 c20011| 2016-04-06T02:52:08.436-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.264-0500 c20011| 2016-04-06T02:52:08.437-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:26.265-0500 c20011| 2016-04-06T02:52:08.437-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.272-0500 c20011| 2016-04-06T02:52:08.437-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|8, t: 1 } and is durable through: { ts: Timestamp 1459929128000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.274-0500 c20011| 2016-04-06T02:52:08.437-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|7, t: 1 }, name-id: "85" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.278-0500 c20011| 2016-04-06T02:52:08.437-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.282-0500 c20011| 2016-04-06T02:52:08.437-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.291-0500 c20011| 2016-04-06T02:52:08.446-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:26.291-0500 c20011| 2016-04-06T02:52:08.446-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.294-0500 c20011| 2016-04-06T02:52:08.446-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|8, t: 1 } and is durable through: { ts: Timestamp 1459929128000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.296-0500 c20011| 2016-04-06T02:52:08.446-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.298-0500 c20011| 2016-04-06T02:52:08.446-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.305-0500 c20011| 2016-04-06T02:52:08.446-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.310-0500 c20011| 2016-04-06T02:52:08.446-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.315-0500 c20011| 2016-04-06T02:52:08.446-0500 I COMMAND [conn10] command config.chunks command: insert { insert: "chunks", documents: [ { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.320-0500 c20011| 2016-04-06T02:52:08.446-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } 
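The config.chunks insert logged above completes only after a round of replSetUpdatePosition traffic because it was sent with writeConcern w: "majority": the primary parks the reply until the secondaries report applied and durable optimes that let it advance the commit point past the insert's opTime, which is exactly the "Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|8, t: 1 }" line. A sketch of the same write as it could be replayed by hand, every field value copied from the log (illustrative only; on a live set this would collide with the document the test already inserted):

    var cfg = new Mongo("mongovm16:20011").getDB("config");
    assert.commandWorked(cfg.runCommand({
        insert: "chunks",
        documents: [{
            _id: "multidrop.coll-_id_MinKey",
            ns: "multidrop.coll",
            min: { _id: MinKey },                  // the initial chunk spans the
            max: { _id: MaxKey },                  // whole shard-key range
            shard: "shard0000",
            lastmod: Timestamp(1000, 0),           // logged as Timestamp 1000|0
            lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0')
        }],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    }));  // reply is held until a majority of the CSRS has replicated the write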
[js_test:multi_coll_drop] 2016-04-06T02:52:26.321-0500 c20011| 2016-04-06T02:52:08.446-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.323-0500 c20011| 2016-04-06T02:52:08.446-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.326-0500 c20011| 2016-04-06T02:52:08.446-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|8, t: 1 } and is durable through: { ts: Timestamp 1459929128000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.333-0500 c20011| 2016-04-06T02:52:08.446-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.334-0500 c20011| 2016-04-06T02:52:08.447-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.342-0500 c20011| 2016-04-06T02:52:08.447-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.344-0500 c20011| 2016-04-06T02:52:08.447-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.348-0500 c20011| 2016-04-06T02:52:08.448-0500 D COMMAND [conn10] run command config.$cmd { update: "collections", updates: [ { q: { _id: "multidrop.coll" }, u: { _id: "multidrop.coll", lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1.0 }, unique: false }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.351-0500 c20011| 2016-04-06T02:52:08.448-0500 D STORAGE [conn10] create collection config.collections {} [js_test:multi_coll_drop] 2016-04-06T02:52:26.351-0500 c20011| 2016-04-06T02:52:08.448-0500 D STORAGE [conn10] stored meta data for config.collections @ RecordId(16) [js_test:multi_coll_drop] 2016-04-06T02:52:26.354-0500 c20011| 2016-04-06T02:52:08.448-0500 D STORAGE [conn10] WiredTigerKVEngine::createRecordStore uri: table:collection-37--6404702321693896372 config: 
type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:26.356-0500 c20011| 2016-04-06T02:52:08.459-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-37--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:26.358-0500 c20011| 2016-04-06T02:52:08.459-0500 D STORAGE [conn10] config.collections: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:26.367-0500 c20011| 2016-04-06T02:52:08.459-0500 D STORAGE [conn10] WiredTigerKVEngine::createSortedDataInterface ident: index-38--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.collections" }), [js_test:multi_coll_drop] 2016-04-06T02:52:26.369-0500 c20011| 2016-04-06T02:52:08.459-0500 D STORAGE [conn10] create uri: table:index-38--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.collections" }), [js_test:multi_coll_drop] 2016-04-06T02:52:26.371-0500 c20011| 2016-04-06T02:52:08.462-0500 D STORAGE [conn10] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-38--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:26.371-0500 c20011| 2016-04-06T02:52:08.463-0500 D STORAGE [conn10] config.collections: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:26.373-0500 c20011| 2016-04-06T02:52:08.463-0500 D QUERY [conn10] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.379-0500 c20011| 2016-04-06T02:52:08.463-0500 I WRITE [conn10] update config.collections query: { _id: "multidrop.coll" } update: { _id: "multidrop.coll", lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1.0 }, unique: false } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 14ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.388-0500 c20011| 2016-04-06T02:52:08.463-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:463 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.395-0500 c20011| 2016-04-06T02:52:08.463-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:463 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: 
{ r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 15ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.401-0500 c20011| 2016-04-06T02:52:08.463-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|8, t: 1 }, name-id: "86" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.403-0500 c20011| 2016-04-06T02:52:08.465-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.411-0500 c20011| 2016-04-06T02:52:08.465-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.428-0500 c20011| 2016-04-06T02:52:08.466-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:555 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.436-0500 c20011| 2016-04-06T02:52:08.466-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:555 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.450-0500 c20011| 2016-04-06T02:52:08.468-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.456-0500 c20011| 2016-04-06T02:52:08.468-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:26.473-0500 c20011| 2016-04-06T02:52:08.476-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:26.474-0500 c20011| 2016-04-06T02:52:08.476-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.475-0500 c20011| 2016-04-06T02:52:08.476-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 
2016-04-06T02:52:26.483-0500 c20011| 2016-04-06T02:52:08.476-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|9, t: 1 } and is durable through: { ts: Timestamp 1459929128000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.485-0500 c20011| 2016-04-06T02:52:08.476-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|8, t: 1 }, name-id: "86" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.490-0500 c20011| 2016-04-06T02:52:08.476-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.494-0500 c20011| 2016-04-06T02:52:08.476-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:26.495-0500 c20011| 2016-04-06T02:52:08.476-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.499-0500 c20011| 2016-04-06T02:52:08.477-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.504-0500 c20011| 2016-04-06T02:52:08.477-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|9, t: 1 } and is durable through: { ts: Timestamp 1459929128000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.505-0500 c20011| 2016-04-06T02:52:08.477-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.510-0500 c20011| 2016-04-06T02:52:08.477-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|9, t: 1 }, name-id: "89" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.515-0500 c20011| 2016-04-06T02:52:08.477-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|9, t: 1 }, name-id: "89" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.526-0500 c20011| 2016-04-06T02:52:08.477-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:26.532-0500 c20011| 2016-04-06T02:52:08.477-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:26.537-0500 c20011| 2016-04-06T02:52:08.477-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:26.540-0500 c20011| 2016-04-06T02:52:08.477-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.544-0500 c20011| 2016-04-06T02:52:08.477-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|10, t: 1 } and is durable through: { ts: Timestamp 1459929128000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.557-0500 c20011| 2016-04-06T02:52:08.477-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|9, t: 1 }, name-id: "89" } [js_test:multi_coll_drop] 2016-04-06T02:52:26.564-0500 s20014| 2016-04-06T02:52:08.658-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 154 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.567-0500 s20014| 2016-04-06T02:52:08.658-0500 I COMMAND [conn1] splitting chunk [{ _id: -97.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:26.573-0500 s20014| 2016-04-06T02:52:08.691-0500 D ASIO [conn1] startCommand: RemoteCommand 156 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.691-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.575-0500 s20014| 2016-04-06T02:52:08.691-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 156 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.577-0500 s20014| 2016-04-06T02:52:08.691-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Request 156 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.580-0500 s20014| 2016-04-06T02:52:08.691-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|8||5704c02806c33406d4d9c0c0 and 5 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:26.582-0500 s20014| 2016-04-06T02:52:08.691-0500 D SHARDING [conn1] major version query from 1|8||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|8 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.586-0500 s20014| 2016-04-06T02:52:08.691-0500 D ASIO [conn1] startCommand: RemoteCommand 158 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.691-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|8 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.588-0500 s20014| 2016-04-06T02:52:08.691-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 158 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:26.596-0500 s20014| 2016-04-06T02:52:08.692-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 158 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: -96.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.598-0500 s20014| 2016-04-06T02:52:08.692-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|10||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.600-0500 s20014| 2016-04-06T02:52:08.692-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 8 version: 1|10||5704c02806c33406d4d9c0c0 based on: 1|8||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.603-0500 s20014| 2016-04-06T02:52:08.692-0500 D ASIO [conn1] startCommand: RemoteCommand 160 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.692-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.605-0500 s20014| 2016-04-06T02:52:08.692-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 160 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:26.613-0500 s20014| 2016-04-06T02:52:08.693-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 160 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|10, lastmodEpoch: 
ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.617-0500 s20014| 2016-04-06T02:52:08.693-0500 I COMMAND [conn1] splitting chunk [{ _id: -96.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:26.620-0500 s20014| 2016-04-06T02:52:08.722-0500 D ASIO [conn1] startCommand: RemoteCommand 162 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.722-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.624-0500 s20014| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 162 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.630-0500 s20014| 2016-04-06T02:52:08.723-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 162 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.635-0500 s20014| 2016-04-06T02:52:08.724-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|10||5704c02806c33406d4d9c0c0 and 6 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:26.640-0500 s20014| 2016-04-06T02:52:08.724-0500 D SHARDING [conn1] major version query from 1|10||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|10 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.645-0500 s20014| 2016-04-06T02:52:08.724-0500 D ASIO [conn1] startCommand: RemoteCommand 164 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.724-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|10 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.649-0500 s20014| 2016-04-06T02:52:08.724-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 164 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.660-0500 s20014| 2016-04-06T02:52:08.724-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 164 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: -95.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.665-0500 s20014| 2016-04-06T02:52:08.724-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|12||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.666-0500 s20014| 
2016-04-06T02:52:08.724-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 9 version: 1|12||5704c02806c33406d4d9c0c0 based on: 1|10||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.673-0500 s20014| 2016-04-06T02:52:08.724-0500 D ASIO [conn1] startCommand: RemoteCommand 166 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.724-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.677-0500 s20014| 2016-04-06T02:52:08.724-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 166 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.683-0500 s20014| 2016-04-06T02:52:08.725-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 166 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.685-0500 s20014| 2016-04-06T02:52:08.725-0500 I COMMAND [conn1] splitting chunk [{ _id: -95.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:26.692-0500 s20014| 2016-04-06T02:52:08.765-0500 D ASIO [conn1] startCommand: RemoteCommand 168 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.765-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.692-0500 s20014| 2016-04-06T02:52:08.765-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 168 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:26.696-0500 s20014| 2016-04-06T02:52:08.767-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 168 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.703-0500 s20014| 2016-04-06T02:52:08.767-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|12||5704c02806c33406d4d9c0c0 and 7 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:26.707-0500 s20014| 2016-04-06T02:52:08.767-0500 D SHARDING [conn1] major version query from 1|12||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|12 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.712-0500 s20014| 2016-04-06T02:52:08.767-0500 D ASIO [conn1] startCommand: RemoteCommand 170 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.767-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|12 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.715-0500 
s20014| 2016-04-06T02:52:08.767-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 170 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:26.729-0500 s20014| 2016-04-06T02:52:08.768-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 170 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: -94.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.732-0500 s20014| 2016-04-06T02:52:08.768-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|14||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.732-0500 s20014| 2016-04-06T02:52:08.768-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 10 version: 1|14||5704c02806c33406d4d9c0c0 based on: 1|12||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.738-0500 s20014| 2016-04-06T02:52:08.768-0500 D ASIO [conn1] startCommand: RemoteCommand 172 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.768-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.742-0500 s20014| 2016-04-06T02:52:08.768-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 172 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:26.750-0500 s20014| 2016-04-06T02:52:08.768-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 172 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.753-0500 s20014| 2016-04-06T02:52:08.769-0500 I COMMAND [conn1] splitting chunk [{ _id: -94.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:26.757-0500 s20014| 2016-04-06T02:52:08.824-0500 D ASIO [conn1] startCommand: RemoteCommand 174 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.824-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.759-0500 s20014| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 174 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:26.764-0500 s20014| 2016-04-06T02:52:08.825-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 174 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: MaxKey }, 
shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.770-0500 s20014| 2016-04-06T02:52:08.825-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|14||5704c02806c33406d4d9c0c0 and 8 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:26.775-0500 s20014| 2016-04-06T02:52:08.825-0500 D SHARDING [conn1] major version query from 1|14||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|14 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.779-0500 s20014| 2016-04-06T02:52:08.825-0500 D ASIO [conn1] startCommand: RemoteCommand 176 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.825-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|14 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.786-0500 s20014| 2016-04-06T02:52:08.825-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 176 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:26.792-0500 s20014| 2016-04-06T02:52:08.826-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 176 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: -93.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.794-0500 s20014| 2016-04-06T02:52:08.826-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|16||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.799-0500 s20014| 2016-04-06T02:52:08.826-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 11 version: 1|16||5704c02806c33406d4d9c0c0 based on: 1|14||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.800-0500 s20014| 2016-04-06T02:52:08.827-0500 D ASIO [conn1] startCommand: RemoteCommand 178 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.827-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.804-0500 s20014| 2016-04-06T02:52:08.827-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 178 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:26.817-0500 s20014| 2016-04-06T02:52:08.828-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 178 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.818-0500 s20014| 2016-04-06T02:52:08.828-0500 I COMMAND [conn1] splitting chunk [{ 
_id: -93.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:26.826-0500 s20014| 2016-04-06T02:52:08.856-0500 D ASIO [conn1] startCommand: RemoteCommand 180 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.856-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.827-0500 s20014| 2016-04-06T02:52:08.857-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 180 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:26.837-0500 s20014| 2016-04-06T02:52:08.857-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 180 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.839-0500 s20014| 2016-04-06T02:52:08.857-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|16||5704c02806c33406d4d9c0c0 and 9 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:26.840-0500 s20014| 2016-04-06T02:52:08.857-0500 D SHARDING [conn1] major version query from 1|16||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|16 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.863-0500 s20014| 2016-04-06T02:52:08.857-0500 D ASIO [conn1] startCommand: RemoteCommand 182 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.857-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|16 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.864-0500 s20014| 2016-04-06T02:52:08.857-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 182 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:26.882-0500 s20014| 2016-04-06T02:52:08.858-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 182 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: -92.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.884-0500 s20014| 2016-04-06T02:52:08.858-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|18||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.886-0500 s20014| 2016-04-06T02:52:08.858-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 12 version: 1|18||5704c02806c33406d4d9c0c0 based on: 1|16||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.889-0500 s20014| 2016-04-06T02:52:08.859-0500 D ASIO [conn1] 
startCommand: RemoteCommand 184 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.859-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.890-0500 s20014| 2016-04-06T02:52:08.859-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 184 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.896-0500 s20014| 2016-04-06T02:52:08.859-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 184 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.897-0500 s20014| 2016-04-06T02:52:08.859-0500 I COMMAND [conn1] splitting chunk [{ _id: -92.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:26.900-0500 s20014| 2016-04-06T02:52:08.886-0500 D ASIO [conn1] startCommand: RemoteCommand 186 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.886-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.901-0500 s20014| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 186 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:26.907-0500 s20014| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 186 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.907-0500 s20014| 2016-04-06T02:52:08.887-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|18||5704c02806c33406d4d9c0c0 and 10 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:26.909-0500 s20014| 2016-04-06T02:52:08.887-0500 D SHARDING [conn1] major version query from 1|18||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|18 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.914-0500 s20014| 2016-04-06T02:52:08.887-0500 D ASIO [conn1] startCommand: RemoteCommand 188 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.887-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|18 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.916-0500 s20014| 2016-04-06T02:52:08.887-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 188 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.926-0500 s20014| 2016-04-06T02:52:08.887-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 188 finished with 
response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: -91.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.929-0500 s20014| 2016-04-06T02:52:08.887-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|20||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.934-0500 s20014| 2016-04-06T02:52:08.887-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 13 version: 1|20||5704c02806c33406d4d9c0c0 based on: 1|18||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.937-0500 s20014| 2016-04-06T02:52:08.887-0500 D ASIO [conn1] startCommand: RemoteCommand 190 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.887-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.941-0500 s20014| 2016-04-06T02:52:08.888-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 190 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.954-0500 s20014| 2016-04-06T02:52:08.888-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 190 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.956-0500 s20014| 2016-04-06T02:52:08.888-0500 I COMMAND [conn1] splitting chunk [{ _id: -91.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:26.959-0500 s20014| 2016-04-06T02:52:08.912-0500 D ASIO [conn1] startCommand: RemoteCommand 192 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:38.912-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.960-0500 s20014| 2016-04-06T02:52:08.912-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 192 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:26.963-0500 s20014| 2016-04-06T02:52:08.912-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 192 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.968-0500 s20014| 2016-04-06T02:52:08.912-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|20||5704c02806c33406d4d9c0c0 and 11 chunks 
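Each successful split bumps the collection's minor version (1|18, then 1|20, and so on), and mongos refreshes its cache incrementally rather than reloading every chunk: the "major version query" logged above fetches only config.chunks documents with lastmod at or above the cached version. The log's "Timestamp 1000|20" is the millisecond rendering of Timestamp(1, 20). A sketch of the equivalent query, run by hand against a config server:

    // Incremental refresh as logged by mongos: fetch only chunks whose
    // version is >= the locally cached 1|20 (Timestamp(1, 20) in shell syntax).
    db.getSiblingDB("config").chunks
      .find({ ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 20) } })
      .sort({ lastmod: 1 });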
[js_test:multi_coll_drop] 2016-04-06T02:52:26.972-0500 s20014| 2016-04-06T02:52:08.912-0500 D SHARDING [conn1] major version query from 1|20||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.978-0500 s20014| 2016-04-06T02:52:08.912-0500 D ASIO [conn1] startCommand: RemoteCommand 194 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.912-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.979-0500 s20014| 2016-04-06T02:52:08.912-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 194 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.985-0500 s20014| 2016-04-06T02:52:08.913-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 194 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: -90.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.985-0500 s20014| 2016-04-06T02:52:08.913-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|22||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.988-0500 s20014| 2016-04-06T02:52:08.913-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 14 version: 1|22||5704c02806c33406d4d9c0c0 based on: 1|20||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:26.992-0500 s20014| 2016-04-06T02:52:08.913-0500 D ASIO [conn1] startCommand: RemoteCommand 196 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.913-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.993-0500 s20014| 2016-04-06T02:52:08.913-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 196 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:26.996-0500 s20014| 2016-04-06T02:52:08.913-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 196 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:26.997-0500 s20014| 2016-04-06T02:52:08.914-0500 I COMMAND [conn1] splitting chunk [{ _id: -90.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.012-0500 s20014| 2016-04-06T02:52:08.929-0500 D ASIO [conn1] startCommand: RemoteCommand 198 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.929-0500 cmd:{ find: 
"chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.014-0500 s20014| 2016-04-06T02:52:08.929-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 198 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.031-0500 s20014| 2016-04-06T02:52:08.930-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 198 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.034-0500 s20014| 2016-04-06T02:52:08.930-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|22||5704c02806c33406d4d9c0c0 and 12 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.037-0500 s20014| 2016-04-06T02:52:08.930-0500 D SHARDING [conn1] major version query from 1|22||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.041-0500 s20014| 2016-04-06T02:52:08.930-0500 D ASIO [conn1] startCommand: RemoteCommand 200 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.930-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.044-0500 s20014| 2016-04-06T02:52:08.930-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 200 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.049-0500 s20014| 2016-04-06T02:52:08.931-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 200 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: -89.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.050-0500 s20014| 2016-04-06T02:52:08.931-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|24||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.051-0500 s20014| 2016-04-06T02:52:08.931-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 15 version: 1|24||5704c02806c33406d4d9c0c0 based on: 1|22||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.054-0500 s20014| 2016-04-06T02:52:08.931-0500 D ASIO [conn1] startCommand: RemoteCommand 202 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.931-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, limit: 1, maxTimeMS: 30000 } 
[js_test:multi_coll_drop] 2016-04-06T02:52:27.055-0500 s20014| 2016-04-06T02:52:08.931-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 202 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.059-0500 s20014| 2016-04-06T02:52:08.931-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 202 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.062-0500 s20014| 2016-04-06T02:52:08.931-0500 I COMMAND [conn1] splitting chunk [{ _id: -89.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.067-0500 s20014| 2016-04-06T02:52:08.951-0500 D ASIO [conn1] startCommand: RemoteCommand 204 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.951-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.068-0500 s20014| 2016-04-06T02:52:08.951-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 204 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.074-0500 s20014| 2016-04-06T02:52:08.952-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 204 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.075-0500 s20014| 2016-04-06T02:52:08.952-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|24||5704c02806c33406d4d9c0c0 and 13 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.078-0500 s20014| 2016-04-06T02:52:08.952-0500 D SHARDING [conn1] major version query from 1|24||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|24 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.082-0500 s20014| 2016-04-06T02:52:08.952-0500 D ASIO [conn1] startCommand: RemoteCommand 206 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.952-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|24 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.084-0500 s20014| 2016-04-06T02:52:08.952-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 206 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.089-0500 s20014| 2016-04-06T02:52:08.952-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 206 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: -88.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 
1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.091-0500 s20014| 2016-04-06T02:52:08.952-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|26||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.096-0500 s20014| 2016-04-06T02:52:08.952-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 16 version: 1|26||5704c02806c33406d4d9c0c0 based on: 1|24||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.098-0500 s20014| 2016-04-06T02:52:08.953-0500 D ASIO [conn1] startCommand: RemoteCommand 208 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.953-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.102-0500 s20014| 2016-04-06T02:52:08.953-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 208 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.109-0500 s20014| 2016-04-06T02:52:08.953-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 208 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.112-0500 s20014| 2016-04-06T02:52:08.953-0500 I COMMAND [conn1] splitting chunk [{ _id: -88.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.116-0500 s20014| 2016-04-06T02:52:08.976-0500 D ASIO [conn1] startCommand: RemoteCommand 210 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.976-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.117-0500 s20014| 2016-04-06T02:52:08.976-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 210 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.137-0500 s20014| 2016-04-06T02:52:08.977-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 210 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.139-0500 s20014| 2016-04-06T02:52:08.977-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|26||5704c02806c33406d4d9c0c0 and 14 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.141-0500 s20014| 2016-04-06T02:52:08.977-0500 D SHARDING [conn1] major version query from 1|26||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|26 } }, sort: { lastmod: 1 } 
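The repeating pattern here (a "splitting chunk [{ _id: N },{ _id: MaxKey })" line, two new chunk documents, then a ChunkManager reload with sequenceNumber incremented) is the test walking the top chunk upward one shard-key value at a time. Each iteration is one split command issued through mongos, roughly:

    // One iteration of the loop seen above, issued through mongos.
    // sh.splitAt() wraps the same "split" admin command with an explicit
    // middle key.
    db.adminCommand({ split: "multidrop.coll", middle: { _id: -87.0 } });
    // equivalently: sh.splitAt("multidrop.coll", { _id: -87.0 })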
[js_test:multi_coll_drop] 2016-04-06T02:52:27.150-0500 s20014| 2016-04-06T02:52:08.977-0500 D ASIO [conn1] startCommand: RemoteCommand 212 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:38.977-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|26 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.153-0500 s20014| 2016-04-06T02:52:08.977-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 212 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.159-0500 s20014| 2016-04-06T02:52:08.977-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 212 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: -87.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.162-0500 s20014| 2016-04-06T02:52:08.977-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|28||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.165-0500 s20014| 2016-04-06T02:52:08.977-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 17 version: 1|28||5704c02806c33406d4d9c0c0 based on: 1|26||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.170-0500 s20014| 2016-04-06T02:52:08.978-0500 D ASIO [conn1] startCommand: RemoteCommand 214 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:38.978-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.170-0500 s20014| 2016-04-06T02:52:08.978-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 214 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.173-0500 s20014| 2016-04-06T02:52:08.978-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 214 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.176-0500 s20014| 2016-04-06T02:52:08.978-0500 I COMMAND [conn1] splitting chunk [{ _id: -87.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.178-0500 s20014| 2016-04-06T02:52:09.034-0500 D ASIO [conn1] startCommand: RemoteCommand 216 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:39.034-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.178-0500 s20014| 2016-04-06T02:52:09.034-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 216 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:27.185-0500 s20014| 2016-04-06T02:52:09.034-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 216 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.187-0500 s20014| 2016-04-06T02:52:09.035-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|28||5704c02806c33406d4d9c0c0 and 15 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.190-0500 s20014| 2016-04-06T02:52:09.035-0500 D SHARDING [conn1] major version query from 1|28||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|28 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.198-0500 s20014| 2016-04-06T02:52:09.035-0500 D ASIO [conn1] startCommand: RemoteCommand 218 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:39.035-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|28 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.199-0500 s20014| 2016-04-06T02:52:09.035-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 218 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:27.204-0500 s20014| 2016-04-06T02:52:09.035-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 218 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: -86.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.207-0500 s20014| 2016-04-06T02:52:09.035-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|30||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.209-0500 s20014| 2016-04-06T02:52:09.035-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 18 version: 1|30||5704c02806c33406d4d9c0c0 based on: 1|28||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.212-0500 s20014| 2016-04-06T02:52:09.036-0500 D ASIO [conn1] startCommand: RemoteCommand 220 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:39.036-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.213-0500 s20014| 2016-04-06T02:52:09.036-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 220 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.216-0500 s20014| 2016-04-06T02:52:09.036-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Request 220 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.217-0500 s20014| 2016-04-06T02:52:09.036-0500 I COMMAND [conn1] splitting chunk [{ _id: -86.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.220-0500 s20014| 2016-04-06T02:52:09.086-0500 D ASIO [conn1] startCommand: RemoteCommand 222 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:39.086-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.222-0500 s20014| 2016-04-06T02:52:09.086-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 222 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.227-0500 s20014| 2016-04-06T02:52:09.089-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 222 finished with response: { waitedMS: 1, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.231-0500 s20014| 2016-04-06T02:52:09.089-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|30||5704c02806c33406d4d9c0c0 and 16 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.235-0500 s20014| 2016-04-06T02:52:09.089-0500 D SHARDING [conn1] major version query from 1|30||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|30 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.238-0500 s20014| 2016-04-06T02:52:09.089-0500 D ASIO [conn1] startCommand: RemoteCommand 224 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:39.089-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|30 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.241-0500 s20014| 2016-04-06T02:52:09.089-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 224 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.248-0500 s20014| 2016-04-06T02:52:09.090-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 224 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: -85.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.248-0500 s20014| 2016-04-06T02:52:09.090-0500 D SHARDING [conn1] 
loaded 2 chunks into new chunk manager for multidrop.coll with version 1|32||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.250-0500 s20014| 2016-04-06T02:52:09.090-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 1ms sequenceNumber: 19 version: 1|32||5704c02806c33406d4d9c0c0 based on: 1|30||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.258-0500 s20014| 2016-04-06T02:52:09.090-0500 D ASIO [conn1] startCommand: RemoteCommand 226 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:39.090-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.261-0500 s20014| 2016-04-06T02:52:09.091-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 226 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.263-0500 s20014| 2016-04-06T02:52:09.093-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 226 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.264-0500 s20014| 2016-04-06T02:52:09.093-0500 I COMMAND [conn1] splitting chunk [{ _id: -85.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.265-0500 s20014| 2016-04-06T02:52:09.126-0500 D ASIO [conn1] startCommand: RemoteCommand 228 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:39.126-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.268-0500 s20014| 2016-04-06T02:52:09.127-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 228 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.273-0500 s20014| 2016-04-06T02:52:09.127-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 228 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.275-0500 s20014| 2016-04-06T02:52:09.127-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|32||5704c02806c33406d4d9c0c0 and 17 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.279-0500 s20014| 2016-04-06T02:52:09.127-0500 D SHARDING [conn1] major version query from 1|32||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|32 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.282-0500 s20014| 2016-04-06T02:52:09.127-0500 D ASIO [conn1] startCommand: RemoteCommand 230 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:39.127-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|32 } }, sort: { lastmod: 1 
}, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.283-0500 s20014| 2016-04-06T02:52:09.127-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 230 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.287-0500 s20014| 2016-04-06T02:52:09.128-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 230 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: -84.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.290-0500 s20014| 2016-04-06T02:52:09.128-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|34||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.293-0500 s20014| 2016-04-06T02:52:09.128-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 20 version: 1|34||5704c02806c33406d4d9c0c0 based on: 1|32||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.296-0500 s20014| 2016-04-06T02:52:09.128-0500 D ASIO [conn1] startCommand: RemoteCommand 232 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:39.128-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.299-0500 s20014| 2016-04-06T02:52:09.128-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 232 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.303-0500 s20014| 2016-04-06T02:52:09.129-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 232 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.304-0500 s20014| 2016-04-06T02:52:09.129-0500 I COMMAND [conn1] splitting chunk [{ _id: -84.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.317-0500 s20014| 2016-04-06T02:52:10.222-0500 D ASIO [conn1] startCommand: RemoteCommand 234 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:40.222-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.319-0500 s20014| 2016-04-06T02:52:10.222-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 234 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:27.327-0500 s20014| 2016-04-06T02:52:10.225-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 234 finished with response: { waitedMS: 2, cursor: { firstBatch: [ { _id: 
"multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.331-0500 s20014| 2016-04-06T02:52:10.225-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|34||5704c02806c33406d4d9c0c0 and 18 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.335-0500 s20014| 2016-04-06T02:52:10.225-0500 D SHARDING [conn1] major version query from 1|34||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|34 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.339-0500 s20014| 2016-04-06T02:52:10.225-0500 D ASIO [conn1] startCommand: RemoteCommand 236 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:40.225-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|34 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.340-0500 s20014| 2016-04-06T02:52:10.225-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 236 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.345-0500 s20014| 2016-04-06T02:52:10.226-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 236 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: -83.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.349-0500 s20014| 2016-04-06T02:52:10.226-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|36||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.351-0500 s20014| 2016-04-06T02:52:10.226-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 21 version: 1|36||5704c02806c33406d4d9c0c0 based on: 1|34||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.357-0500 s20014| 2016-04-06T02:52:10.227-0500 D ASIO [conn1] startCommand: RemoteCommand 238 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:40.227-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.361-0500 s20014| 2016-04-06T02:52:10.227-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 238 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.370-0500 s20014| 2016-04-06T02:52:10.227-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 238 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: MaxKey }, shard: "shard0000" } 
], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.377-0500 s20014| 2016-04-06T02:52:10.228-0500 I COMMAND [conn1] splitting chunk [{ _id: -83.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.379-0500 s20014| 2016-04-06T02:52:10.254-0500 D ASIO [conn1] startCommand: RemoteCommand 240 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:40.254-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.380-0500 s20014| 2016-04-06T02:52:10.254-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 240 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.394-0500 s20014| 2016-04-06T02:52:10.255-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 240 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.396-0500 s20014| 2016-04-06T02:52:10.255-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|36||5704c02806c33406d4d9c0c0 and 19 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.399-0500 s20014| 2016-04-06T02:52:10.255-0500 D SHARDING [conn1] major version query from 1|36||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|36 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.407-0500 s20014| 2016-04-06T02:52:10.255-0500 D ASIO [conn1] startCommand: RemoteCommand 242 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:40.255-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|36 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.409-0500 s20014| 2016-04-06T02:52:10.255-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 242 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:27.411-0500 s20014| 2016-04-06T02:52:10.255-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 242 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: -82.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.413-0500 s20014| 2016-04-06T02:52:10.255-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|38||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.417-0500 s20014| 2016-04-06T02:52:10.255-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 22 version: 
1|38||5704c02806c33406d4d9c0c0 based on: 1|36||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.419-0500 s20014| 2016-04-06T02:52:10.256-0500 D ASIO [conn1] startCommand: RemoteCommand 244 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:40.256-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.419-0500 s20014| 2016-04-06T02:52:10.256-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 244 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.426-0500 s20014| 2016-04-06T02:52:10.256-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 244 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.428-0500 s20014| 2016-04-06T02:52:10.256-0500 I COMMAND [conn1] splitting chunk [{ _id: -82.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:27.433-0500 s20014| 2016-04-06T02:52:10.285-0500 D ASIO [conn1] startCommand: RemoteCommand 246 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:40.285-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.437-0500 s20014| 2016-04-06T02:52:10.285-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 246 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.445-0500 s20014| 2016-04-06T02:52:10.290-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 246 finished with response: { waitedMS: 4, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.448-0500 s20014| 2016-04-06T02:52:10.290-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|38||5704c02806c33406d4d9c0c0 and 20 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:27.450-0500 s20014| 2016-04-06T02:52:10.290-0500 D SHARDING [conn1] major version query from 1|38||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|38 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.453-0500 s20014| 2016-04-06T02:52:10.290-0500 D ASIO [conn1] startCommand: RemoteCommand 248 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:40.290-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|38 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.455-0500 s20014| 2016-04-06T02:52:10.291-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 248 on host 
mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:27.459-0500 s20014| 2016-04-06T02:52:10.291-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 248 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: -81.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.461-0500 s20014| 2016-04-06T02:52:10.291-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|40||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.463-0500 s20014| 2016-04-06T02:52:10.291-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 23 version: 1|40||5704c02806c33406d4d9c0c0 based on: 1|38||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.468-0500 s20014| 2016-04-06T02:52:10.291-0500 D ASIO [conn1] startCommand: RemoteCommand 250 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:40.291-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.470-0500 s20014| 2016-04-06T02:52:10.291-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 250 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.472-0500 d20010| 2016-04-06T02:52:10.222-0500 I SHARDING [conn5] distributed lock with ts: 5704c02965c17830b843f19e' unlocked. 
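
The mongos records above show one split per key: each cycle reloads the chunk manager, issues splitChunk for the trailing [{ _id: k }, { _id: MaxKey }) range, and the shard commits under the distributed lock. A minimal mongo-shell sketch of a loop that would produce this pattern — the collection name and key range are taken from the log records above; this is an illustration, not the verbatim body of multi_coll_drop.js:

    var admin = db.getSiblingDB("admin");
    // One split per key, mirroring the successive splitKeys (-86 .. -81)
    // seen in the splitChunk requests logged above.
    for (var key = -86; key <= -81; key++) {
        assert.commandWorked(admin.runCommand({ split: "multidrop.coll", middle: { _id: key } }));
    }
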
[js_test:multi_coll_drop] 2016-04-06T02:52:27.475-0500 d20010| 2016-04-06T02:52:10.222-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -84.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -83.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|34, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 1092ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.479-0500 d20010| 2016-04-06T02:52:10.228-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -83.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -82.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|36, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:27.480-0500 d20010| 2016-04-06T02:52:10.232-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -83.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02a65c17830b843f1a0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.483-0500 d20010| 2016-04-06T02:52:10.232-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|36||5704c02806c33406d4d9c0c0, current metadata version is 1|36||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.484-0500 d20010| 2016-04-06T02:52:10.233-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|36||5704c02806c33406d4d9c0c0, took 0ms) [js_test:multi_coll_drop] 2016-04-06T02:52:27.485-0500 d20010| 2016-04-06T02:52:10.233-0500 I SHARDING [conn5] splitChunk accepted at version 1|36||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.488-0500 d20010| 2016-04-06T02:52:10.239-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:10.239-0500-5704c02a65c17830b843f1a1", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130239), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -83.0 }, max: { _id: MaxKey } }, left: { min: { _id: -83.0 }, max: { _id: -82.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -82.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.490-0500 d20010| 2016-04-06T02:52:10.254-0500 I SHARDING [conn5] distributed lock with ts: 5704c02a65c17830b843f1a0' unlocked. 
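
Each accepted splitChunk logs a "split" event into config.changelog before releasing the distributed lock; the full event document (the before range plus the left/right halves with their new lastmod versions) appears in the record above. A quick way to review those events from a shell connected to the config server replica set, assuming only the field names visible in that logged document:

    var configDB = db.getSiblingDB("config");
    configDB.changelog.find({ what: "split", ns: "multidrop.coll" }).sort({ time: 1 }).forEach(function(ev) {
        // details.left.max is the split point committed by this event
        print(tojson(ev.details.before.min) + " .. " + tojson(ev.details.before.max) +
              " split at " + tojson(ev.details.left.max));
    });
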
[js_test:multi_coll_drop] 2016-04-06T02:52:27.491-0500 d20010| 2016-04-06T02:52:10.256-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -82.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -81.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|38, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:27.492-0500 d20010| 2016-04-06T02:52:10.265-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -82.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c02a65c17830b843f1a2 [js_test:multi_coll_drop] 2016-04-06T02:52:27.494-0500 d20010| 2016-04-06T02:52:10.265-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|38||5704c02806c33406d4d9c0c0, current metadata version is 1|38||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.495-0500 d20010| 2016-04-06T02:52:10.266-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|38||5704c02806c33406d4d9c0c0, took 0ms) [js_test:multi_coll_drop] 2016-04-06T02:52:27.497-0500 d20010| 2016-04-06T02:52:10.266-0500 I SHARDING [conn5] splitChunk accepted at version 1|38||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:27.500-0500 d20010| 2016-04-06T02:52:10.276-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:10.276-0500-5704c02a65c17830b843f1a3", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130276), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -82.0 }, max: { _id: MaxKey } }, left: { min: { _id: -82.0 }, max: { _id: -81.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -81.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.501-0500 d20010| 2016-04-06T02:52:10.285-0500 I SHARDING [conn5] distributed lock with ts: 5704c02a65c17830b843f1a2' unlocked. 
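
Worth noting in the refresh traffic above: every config read is a find on config.chunks with readConcern { level: "majority", afterOpTime: ... } carrying the opTime of the preceding write, and the incremental reload filters on lastmod: { $gte: <current collection version> }. A hand-written replay of one such read, with the opTime and version copied from the log — chunk version 1|38 prints as "Timestamp 1000|38", i.e. Timestamp(1, 38), and afterOpTime is normally filled in internally by the ShardRegistry, so acceptance of a client-supplied value here is an assumption:

    var res = db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 38) } },
        sort: { lastmod: 1 },
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929130, 10), t: NumberLong(1) } },
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);  // expect the chunks at versions 1|39 and 1|40 seen above
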
[js_test:multi_coll_drop] 2016-04-06T02:52:27.505-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.505-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.507-0500 c20013| 2016-04-06T02:52:08.354-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:27.507-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.510-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.511-0500 c20013| 2016-04-06T02:52:08.354-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.512-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.514-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.514-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.515-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.516-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.517-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.519-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.520-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.520-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.521-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.522-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.524-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.527-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.527-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:27.528-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.529-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.530-0500 c20013| 2016-04-06T02:52:08.355-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.534-0500 c20013| 2016-04-06T02:52:08.355-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:27.539-0500 c20013| 2016-04-06T02:52:08.356-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.546-0500 c20013| 2016-04-06T02:52:08.356-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 294 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.548-0500 c20013| 2016-04-06T02:52:08.356-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 294 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.555-0500 c20013| 2016-04-06T02:52:08.356-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 295 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.356-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929127000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.557-0500 c20013| 2016-04-06T02:52:08.356-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 294 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.558-0500 c20013| 2016-04-06T02:52:08.356-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 295 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.571-0500 c20013| 2016-04-06T02:52:08.356-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 295 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.575-0500 c20013| 2016-04-06T02:52:08.357-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.577-0500 c20013| 2016-04-06T02:52:08.357-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.581-0500 c20013| 2016-04-06T02:52:08.357-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 298 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.582-0500 c20013| 2016-04-06T02:52:08.357-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 298 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.583-0500 c20013| 2016-04-06T02:52:08.357-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:27.590-0500 c20013| 2016-04-06T02:52:08.357-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 299 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.357-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.590-0500 c20013| 2016-04-06T02:52:08.357-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 298 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.591-0500 c20013| 2016-04-06T02:52:08.358-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 299 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.596-0500 c20013| 2016-04-06T02:52:08.365-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 299 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|2, t: 1, h: 3529413680518098651, v: 2, op: "i", ns: "config.lockpings", o: { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929128362) } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.599-0500 c20013| 2016-04-06T02:52:08.366-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|2 and ending at ts: Timestamp 1459929128000|2 [js_test:multi_coll_drop] 2016-04-06T02:52:27.600-0500 c20013| 2016-04-06T02:52:08.367-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:27.601-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.603-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.604-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.606-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.609-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.609-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.611-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.611-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.614-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.617-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.618-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.619-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.620-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.620-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.621-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.622-0500 c20013| 2016-04-06T02:52:08.367-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:27.624-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.629-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.629-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.631-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:27.632-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.634-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.638-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.639-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.641-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.645-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.657-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.659-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.661-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.666-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.666-0500 c20013| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.670-0500 c20013| 2016-04-06T02:52:08.368-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.676-0500 c20013| 2016-04-06T02:52:08.368-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.677-0500 c20013| 2016-04-06T02:52:08.368-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:27.683-0500 c20013| 2016-04-06T02:52:08.368-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.688-0500 c20013| 2016-04-06T02:52:08.368-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 302 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.690-0500 c20013| 2016-04-06T02:52:08.368-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 302 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.690-0500 c20013| 2016-04-06T02:52:08.368-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 302 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.694-0500 c20013| 2016-04-06T02:52:08.369-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 304 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.369-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.695-0500 c20013| 2016-04-06T02:52:08.369-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 304 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.699-0500 c20013| 2016-04-06T02:52:08.370-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.703-0500 c20013| 2016-04-06T02:52:08.370-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 305 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.704-0500 c20013| 2016-04-06T02:52:08.370-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 305 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.705-0500 c20013| 2016-04-06T02:52:08.371-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 305 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.706-0500 c20013| 2016-04-06T02:52:08.371-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 304 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.707-0500 c20013| 2016-04-06T02:52:08.371-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.708-0500 c20013| 2016-04-06T02:52:08.371-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:27.714-0500 c20013| 2016-04-06T02:52:08.371-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 308 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.371-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.716-0500 c20013| 2016-04-06T02:52:08.372-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 308 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:27.720-0500 c20013| 2016-04-06T02:52:08.374-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 308 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|3, t: 1, h: -1942800269136220941, v: 2, op: "c", ns: "config.$cmd", o: { create: "databases" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.723-0500 c20013| 2016-04-06T02:52:08.375-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|3 and ending at ts: Timestamp 1459929128000|3 [js_test:multi_coll_drop] 2016-04-06T02:52:27.724-0500 c20013| 2016-04-06T02:52:08.375-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:27.727-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.728-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.728-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.728-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.729-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.736-0500 c20011| 2016-04-06T02:52:08.477-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.739-0500 c20011| 2016-04-06T02:52:08.477-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.745-0500 c20011| 2016-04-06T02:52:08.477-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.747-0500 c20011| 2016-04-06T02:52:08.478-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.752-0500 c20011| 2016-04-06T02:52:08.478-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.754-0500 c20011| 2016-04-06T02:52:08.478-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.755-0500 c20011| 2016-04-06T02:52:08.478-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:27.758-0500 c20011| 2016-04-06T02:52:08.478-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|9, t: 1 } and is durable through: { ts: Timestamp 1459929128000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.761-0500 c20011| 2016-04-06T02:52:08.478-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|9, t: 1 }, name-id: "89" } [js_test:multi_coll_drop] 2016-04-06T02:52:27.763-0500 c20011| 2016-04-06T02:52:08.478-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.769-0500 c20011| 2016-04-06T02:52:08.478-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.773-0500 c20011| 2016-04-06T02:52:08.480-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.774-0500 c20011| 2016-04-06T02:52:08.480-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:27.779-0500 c20011| 2016-04-06T02:52:08.480-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|10, t: 1 } and is durable through: { ts: Timestamp 1459929128000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.783-0500 c20011| 2016-04-06T02:52:08.480-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|9, t: 1 }, name-id: "89" } [js_test:multi_coll_drop] 2016-04-06T02:52:27.787-0500 c20011| 2016-04-06T02:52:08.480-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|9, t: 1 }, 
name-id: "89" } [js_test:multi_coll_drop] 2016-04-06T02:52:27.789-0500 c20011| 2016-04-06T02:52:08.480-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.793-0500 c20011| 2016-04-06T02:52:08.480-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.795-0500 c20011| 2016-04-06T02:52:08.480-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.796-0500 c20011| 2016-04-06T02:52:08.480-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:27.801-0500 c20011| 2016-04-06T02:52:08.480-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|10, t: 1 } and is durable through: { ts: Timestamp 1459929128000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.806-0500 c20011| 2016-04-06T02:52:08.480-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.817-0500 c20011| 2016-04-06T02:52:08.480-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.820-0500 c20011| 2016-04-06T02:52:08.482-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, 
t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.821-0500 c20011| 2016-04-06T02:52:08.482-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:27.824-0500 c20011| 2016-04-06T02:52:08.482-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.826-0500 c20011| 2016-04-06T02:52:08.482-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|10, t: 1 } and is durable through: { ts: Timestamp 1459929128000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.828-0500 c20011| 2016-04-06T02:52:08.482-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.833-0500 c20011| 2016-04-06T02:52:08.482-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.838-0500 c20011| 2016-04-06T02:52:08.482-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|9, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.842-0500 c20011| 2016-04-06T02:52:08.482-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|9, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.846-0500 c20011| 2016-04-06T02:52:08.482-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.847-0500 c20011| 2016-04-06T02:52:08.482-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:27.849-0500 c20011| 2016-04-06T02:52:08.483-0500 D REPL [conn17] 
received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|10, t: 1 } and is durable through: { ts: Timestamp 1459929128000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.851-0500 c20011| 2016-04-06T02:52:08.483-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.856-0500 c20011| 2016-04-06T02:52:08.483-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.865-0500 c20011| 2016-04-06T02:52:08.483-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.870-0500 c20011| 2016-04-06T02:52:08.483-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.875-0500 c20011| 2016-04-06T02:52:08.484-0500 I COMMAND [conn10] command config.$cmd command: update { update: "collections", updates: [ { q: { _id: "multidrop.coll" }, u: { _id: "multidrop.coll", lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1.0 }, unique: false }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:444 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 35ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.879-0500 c20011| 2016-04-06T02:52:08.485-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.883-0500 c20011| 2016-04-06T02:52:08.485-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.888-0500 c20011| 2016-04-06T02:52:08.485-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.890-0500 c20011| 2016-04-06T02:52:08.485-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:27.896-0500 c20011| 2016-04-06T02:52:08.485-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.899-0500 c20011| 2016-04-06T02:52:08.490-0500 D COMMAND [conn10] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.489-0500-5704c02806c33406d4d9c0c1", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128489), what: "shardCollection.end", ns: "multidrop.coll", details: { version: "1|0||5704c02806c33406d4d9c0c0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.906-0500 c20011| 2016-04-06T02:52:08.490-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:695 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.909-0500 c20011| 2016-04-06T02:52:08.490-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:695 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.910-0500 c20011| 2016-04-06T02:52:08.492-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|11, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|10, t: 1 }, name-id: "90" } [js_test:multi_coll_drop] 2016-04-06T02:52:27.914-0500 c20011| 2016-04-06T02:52:08.493-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:27.924-0500 c20011| 2016-04-06T02:52:08.493-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { 
ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:27.924-0500 c20011| 2016-04-06T02:52:08.493-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:27.934-0500 c20011| 2016-04-06T02:52:08.493-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|11, t: 1 } and is durable through: { ts: Timestamp 1459929128000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.943-0500 c20011| 2016-04-06T02:52:08.493-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929128000|11, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|10, t: 1 }, name-id: "90" } [js_test:multi_coll_drop] 2016-04-06T02:52:27.953-0500 c20011| 2016-04-06T02:52:08.493-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:27.964-0500 c20011| 2016-04-06T02:52:08.493-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:27.970-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.973-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.976-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.976-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.978-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.982-0500 s20014| 2016-04-06T02:52:13.716-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:27.983-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.985-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.986-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.989-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool 
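
The getMore calls above are the secondaries tailing the primary's oplog: each awaitable getMore carries the syncing node's replication term and its lastKnownCommittedOpTime, and the primary answers with a (possibly empty) nextBatch of oplog entries. A minimal shell sketch of that command shape, with the cursor id and optime copied from the log (the logged "Timestamp 1459929128000|10" is milliseconds|increment, which corresponds to Timestamp(1459929128, 10) in the shell; the exact numeric wrappers are an assumption):

    // Sketch only: this is the shape of the command, not something that can be
    // replayed against another connection's cursor.
    var reply = db.getSiblingDB("local").runCommand({
        getMore: NumberLong("17466612721"),   // tailable cursor on local.oplog.rs
        collection: "oplog.rs",
        maxTimeMS: 2500,                      // await up to 2.5s for new entries
        term: NumberLong(1),                  // replication term of the syncing node
        lastKnownCommittedOpTime: { ts: Timestamp(1459929128, 10), t: NumberLong(1) }
    });
    // reply.cursor.nextBatch holds zero or more oplog entries, matching the
    // "fetcher read 0 operations" / "fetcher read 1 operations" lines above.
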
[js_test:multi_coll_drop] 2016-04-06T02:52:27.989-0500 c20013| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:27.990-0500 c20013| 2016-04-06T02:52:08.375-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:28.012-0500 c20011| 2016-04-06T02:52:08.494-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.025-0500 c20011| 2016-04-06T02:52:08.495-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.026-0500 c20013| 2016-04-06T02:52:08.376-0500 D STORAGE [repl writer worker 0] create collection config.databases {} [js_test:multi_coll_drop] 2016-04-06T02:52:28.027-0500 c20013| 2016-04-06T02:52:08.376-0500 D STORAGE [repl writer worker 0] stored meta data for config.databases @ RecordId(16) [js_test:multi_coll_drop] 2016-04-06T02:52:28.030-0500 c20013| 2016-04-06T02:52:08.376-0500 D STORAGE [repl writer worker 0] WiredTigerKVEngine::createRecordStore uri: table:collection-37-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:28.032-0500 c20013| 2016-04-06T02:52:08.376-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.035-0500 c20013| 2016-04-06T02:52:08.377-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 310 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.377-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.038-0500 c20013| 2016-04-06T02:52:08.377-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 310 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:28.042-0500 c20013| 2016-04-06T02:52:08.378-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 310 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|4, t: 1, h: 4575545370530673351, v: 2, op: "i", ns: "config.databases", o: { _id: "multidrop", primary: "shard0000", partitioned: true } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.045-0500 c20013| 2016-04-06T02:52:08.378-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|4 and ending at ts: Timestamp 1459929128000|4 [js_test:multi_coll_drop] 2016-04-06T02:52:28.048-0500 c20013| 2016-04-06T02:52:08.380-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 312 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.380-0500 cmd:{ getMore: 
17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.049-0500 c20013| 2016-04-06T02:52:08.380-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 312 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:28.052-0500 c20013| 2016-04-06T02:52:08.385-0500 D STORAGE [repl writer worker 0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-37-751336887848580549 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:28.056-0500 c20013| 2016-04-06T02:52:08.386-0500 D STORAGE [repl writer worker 0] config.databases: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:28.061-0500 c20013| 2016-04-06T02:52:08.386-0500 D STORAGE [repl writer worker 0] WiredTigerKVEngine::createSortedDataInterface ident: index-38-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.databases" }), [js_test:multi_coll_drop] 2016-04-06T02:52:28.067-0500 c20013| 2016-04-06T02:52:08.386-0500 D STORAGE [repl writer worker 0] create uri: table:index-38-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.databases" }), [js_test:multi_coll_drop] 2016-04-06T02:52:28.068-0500 c20013| 2016-04-06T02:52:08.390-0500 D STORAGE [repl writer worker 0] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-38-751336887848580549 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:52:28.073-0500 c20013| 2016-04-06T02:52:08.390-0500 D STORAGE [repl writer worker 0] config.databases: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:28.075-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.077-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.078-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.079-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.081-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.081-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.082-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.085-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool 
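
The batch fetched above carried a command-type entry, { op: "c", ns: "config.$cmd", o: { create: "databases" } }, and the D STORAGE lines show the secondary applying it: WiredTigerKVEngine::createRecordStore builds the collection's table and createSortedDataInterface builds its _id index. A small, safe way to inspect such command entries on any member (plain queries against the local database, field names exactly as they appear in the log):

    db.getSiblingDB("local").oplog.rs
      .find({ op: "c" })                 // command entries: create, drop, ...
      .sort({ $natural: -1 })            // newest first
      .limit(5)
      .forEach(function (e) { printjson({ ts: e.ts, ns: e.ns, o: e.o }); });
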
[js_test:multi_coll_drop] 2016-04-06T02:52:28.086-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.088-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.091-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.093-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.095-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.097-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.098-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.101-0500 c20013| 2016-04-06T02:52:08.391-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.115-0500 c20013| 2016-04-06T02:52:08.392-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:28.118-0500 c20013| 2016-04-06T02:52:08.392-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:28.121-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.122-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.122-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.124-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.125-0500 c20011| 2016-04-06T02:52:08.495-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.127-0500 c20011| 2016-04-06T02:52:08.495-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|11, t: 1 } and is durable through: { ts: Timestamp 1459929128000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.131-0500 c20011| 2016-04-06T02:52:08.495-0500 D REPL [conn17] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.133-0500 c20011| 2016-04-06T02:52:08.495-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.140-0500 c20011| 2016-04-06T02:52:08.495-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.145-0500 c20011| 2016-04-06T02:52:08.496-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.145-0500 c20011| 2016-04-06T02:52:08.496-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.152-0500 c20011| 2016-04-06T02:52:08.496-0500 I COMMAND [conn10] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.489-0500-5704c02806c33406d4d9c0c1", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128489), what: "shardCollection.end", ns: "multidrop.coll", details: { version: 
"1|0||5704c02806c33406d4d9c0c0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.154-0500 c20011| 2016-04-06T02:52:08.496-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.157-0500 c20011| 2016-04-06T02:52:08.496-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|11, t: 1 } and is durable through: { ts: Timestamp 1459929128000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.160-0500 c20011| 2016-04-06T02:52:08.496-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.162-0500 c20011| 2016-04-06T02:52:08.496-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.167-0500 c20011| 2016-04-06T02:52:08.496-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.169-0500 c20011| 2016-04-06T02:52:08.496-0500 D COMMAND [conn10] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02806c33406d4d9c0be') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.171-0500 c20011| 2016-04-06T02:52:08.496-0500 D QUERY [conn10] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.174-0500 c20011| 2016-04-06T02:52:08.496-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02806c33406d4d9c0be') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.178-0500 c20011| 2016-04-06T02:52:08.496-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.184-0500 c20011| 2016-04-06T02:52:08.496-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.186-0500 c20011| 2016-04-06T02:52:08.496-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.190-0500 c20011| 2016-04-06T02:52:08.497-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.193-0500 c20011| 2016-04-06T02:52:08.498-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.193-0500 c20011| 2016-04-06T02:52:08.498-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.196-0500 c20011| 2016-04-06T02:52:08.498-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|11, t: 1 } and is durable through: { ts: Timestamp 1459929128000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.205-0500 c20011| 2016-04-06T02:52:08.498-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.211-0500 c20011| 2016-04-06T02:52:08.498-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ 
Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.215-0500 c20011| 2016-04-06T02:52:08.499-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.218-0500 c20011| 2016-04-06T02:52:08.500-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.225-0500 c20011| 2016-04-06T02:52:08.500-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.225-0500 c20011| 2016-04-06T02:52:08.500-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.229-0500 c20011| 2016-04-06T02:52:08.500-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|12, t: 1 } and is durable through: { ts: Timestamp 1459929128000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.231-0500 c20011| 2016-04-06T02:52:08.500-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.235-0500 c20011| 2016-04-06T02:52:08.500-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.241-0500 c20011| 2016-04-06T02:52:08.500-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.241-0500 c20011| 2016-04-06T02:52:08.500-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.243-0500 c20013| 
2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.244-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.246-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.248-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.251-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.252-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.256-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.260-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.261-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.261-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.262-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.265-0500 c20013| 2016-04-06T02:52:08.392-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:28.269-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.280-0500 c20013| 2016-04-06T02:52:08.392-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.285-0500 c20013| 2016-04-06T02:52:08.392-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 313 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } 
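
The Reporter traffic above is the feedback half of majority commitment: after applying a batch, the secondary's SyncSourceFeedback thread pushes its progress upstream. Annotated shape of the replSetUpdatePosition payload (values copied from the entry above; the command is internal to replication and is shown only to label its fields):

    {
        replSetUpdatePosition: 1,
        optimes: [
            { durableOpTime: { ts: Timestamp(1459929117, 1), t: -1 },  // journaled to disk
              appliedOpTime: { ts: Timestamp(1459929117, 1), t: -1 },  // applied in memory
              memberId: 0,                                             // member's id in the replset config
              cfgver: 1 },                                             // config version the report assumes
            // ...one element per member this node knows progress for
        ]
    }
    // Once a majority of durableOpTimes reach a given optime, the primary
    // advances _lastCommittedOpTime, as the "Updating _lastCommittedOpTime"
    // lines in this log show.
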
[js_test:multi_coll_drop] 2016-04-06T02:52:28.288-0500 c20013| 2016-04-06T02:52:08.392-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 313 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:28.288-0500 c20013| 2016-04-06T02:52:08.392-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.289-0500 c20013| 2016-04-06T02:52:08.393-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 313 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.290-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.291-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.291-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.292-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.292-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.293-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.299-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.302-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.303-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.304-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.305-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.312-0500 c20013| 2016-04-06T02:52:08.393-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.313-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.321-0500 c20013| 2016-04-06T02:52:08.393-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] 
startCommand: RemoteCommand 315 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.323-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.324-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.325-0500 c20013| 2016-04-06T02:52:08.393-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:28.330-0500 c20011| 2016-04-06T02:52:08.500-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.335-0500 c20011| 2016-04-06T02:52:08.501-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|12, t: 1 } and is durable through: { ts: Timestamp 1459929128000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.341-0500 c20011| 2016-04-06T02:52:08.501-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.346-0500 c20011| 2016-04-06T02:52:08.501-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929128000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|11, t: 1 }, name-id: "91" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.350-0500 c20011| 2016-04-06T02:52:08.502-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.367-0500 c20011| 2016-04-06T02:52:08.502-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.370-0500 c20011| 2016-04-06T02:52:08.502-0500 D REPL [conn16] received notification that node 
with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.373-0500 c20011| 2016-04-06T02:52:08.502-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|12, t: 1 } and is durable through: { ts: Timestamp 1459929128000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.378-0500 c20011| 2016-04-06T02:52:08.502-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.388-0500 c20011| 2016-04-06T02:52:08.502-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.396-0500 c20011| 2016-04-06T02:52:08.502-0500 I COMMAND [conn10] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02806c33406d4d9c0be') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:561 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.400-0500 c20011| 2016-04-06T02:52:08.502-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.404-0500 c20011| 2016-04-06T02:52:08.502-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.410-0500 c20011| 2016-04-06T02:52:08.503-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.411-0500 c20011| 2016-04-06T02:52:08.503-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.440-0500 c20011| 2016-04-06T02:52:08.503-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|12, t: 1 } and is durable through: { ts: Timestamp 1459929128000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.443-0500 c20011| 2016-04-06T02:52:08.503-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.446-0500 c20011| 2016-04-06T02:52:08.503-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.447-0500 c20011| 2016-04-06T02:52:08.503-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.449-0500 c20011| 2016-04-06T02:52:08.503-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.451-0500 c20011| 2016-04-06T02:52:08.506-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.455-0500 c20011| 2016-04-06T02:52:08.506-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.458-0500 c20011| 2016-04-06T02:52:08.506-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.462-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: 'ns_1_min_1' io: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.464-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: 'ns_1_shard_1_min_1' io: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.466-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: 'ns_1_lastmod_1' io: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.467-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:28.469-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] score(1.0002) = baseScore(1) + productivity((0 advanced)/(1 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:28.472-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, shard: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:28.474-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] score(1.0002) = baseScore(1) + productivity((0 advanced)/(1 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:28.475-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, lastmod: 1 } planHitEOF=1 [js_test:multi_coll_drop] 2016-04-06T02:52:28.492-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:28.497-0500 c20011| 2016-04-06T02:52:08.506-0500 D QUERY [conn10] Winning plan: IXSCAN { ns: 1, lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.506-0500 c20011| 2016-04-06T02:52:08.506-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 fromMultiPlanner:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:550 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.517-0500 c20011| 2016-04-06T02:52:08.513-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f17c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new 
Date(1459929128507), why: "splitting chunk [{ _id: MinKey }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.520-0500 c20011| 2016-04-06T02:52:08.513-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.521-0500 c20011| 2016-04-06T02:52:08.513-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.524-0500 c20011| 2016-04-06T02:52:08.513-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.528-0500 c20011| 2016-04-06T02:52:08.514-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:705 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.532-0500 c20011| 2016-04-06T02:52:08.514-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:705 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.537-0500 c20011| 2016-04-06T02:52:08.517-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.539-0500 c20011| 2016-04-06T02:52:08.517-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.542-0500 c20011| 2016-04-06T02:52:08.517-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|13, t: 1 } and is durable through: { ts: Timestamp 1459929128000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.543-0500 c20011| 2016-04-06T02:52:08.517-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.549-0500 c20011| 2016-04-06T02:52:08.517-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.551-0500 c20011| 2016-04-06T02:52:08.517-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.556-0500 c20011| 2016-04-06T02:52:08.518-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.562-0500 c20011| 2016-04-06T02:52:08.518-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|13, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|12, t: 1 }, name-id: "92" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.565-0500 c20011| 2016-04-06T02:52:08.518-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.568-0500 c20011| 2016-04-06T02:52:08.518-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.573-0500 c20011| 2016-04-06T02:52:08.518-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.577-0500 c20011| 2016-04-06T02:52:08.518-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|13, t: 1 } and is durable through: { ts: Timestamp 1459929128000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.581-0500 c20011| 2016-04-06T02:52:08.518-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|13, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|12, t: 1 }, name-id: "92" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.586-0500 c20011| 2016-04-06T02:52:08.518-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, 
memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.592-0500 c20011| 2016-04-06T02:52:08.519-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.593-0500 c20011| 2016-04-06T02:52:08.519-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.600-0500 c20011| 2016-04-06T02:52:08.519-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|13, t: 1 } and is durable through: { ts: Timestamp 1459929128000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.602-0500 c20011| 2016-04-06T02:52:08.519-0500 D REPL [conn17] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.603-0500 c20011| 2016-04-06T02:52:08.519-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.609-0500 c20011| 2016-04-06T02:52:08.519-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.613-0500 c20011| 2016-04-06T02:52:08.520-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.622-0500 c20011| 2016-04-06T02:52:08.520-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f17c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128507), why: "splitting chunk [{ _id: MinKey }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f17c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: 
"mongovm16:20010:1459929128:185613966", when: new Date(1459929128507), why: "splitting chunk [{ _id: MinKey }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:612 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.627-0500 c20011| 2016-04-06T02:52:08.520-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.630-0500 c20011| 2016-04-06T02:52:08.520-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.632-0500 c20011| 2016-04-06T02:52:08.520-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.636-0500 c20011| 2016-04-06T02:52:08.521-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|13, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.641-0500 c20011| 2016-04-06T02:52:08.521-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.644-0500 c20011| 2016-04-06T02:52:08.521-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|13, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.647-0500 c20011| 2016-04-06T02:52:08.521-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.648-0500 c20011| 2016-04-06T02:52:08.521-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.650-0500 c20011| 2016-04-06T02:52:08.521-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.652-0500 c20011| 2016-04-06T02:52:08.521-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|13, t: 1 } and is durable through: { ts: Timestamp 1459929128000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.658-0500 c20011| 2016-04-06T02:52:08.521-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.661-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: 'ns_1_min_1' io: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.663-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: 'ns_1_shard_1_min_1' io: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.664-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: 'ns_1_lastmod_1' io: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.669-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] Relevant index 0 is kp: { lastmod: 1 } multikey name: 'doesnt_matter' [js_test:multi_coll_drop] 2016-04-06T02:52:28.671-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] Relevant index 0 is kp: { lastmod: 1 } multikey name: 'doesnt_matter' [js_test:multi_coll_drop] 2016-04-06T02:52:28.674-0500 c20011| 
2016-04-06T02:52:08.521-0500 D QUERY [conn25] Scoring query plan: IXSCAN { ns: 1, lastmod: 1 } planHitEOF=1 [js_test:multi_coll_drop] 2016-04-06T02:52:28.676-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:28.677-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] Scoring query plan: IXSCAN { ns: 1, shard: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:28.680-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:28.681-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] Scoring query plan: IXSCAN { ns: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:28.686-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:28.695-0500 c20011| 2016-04-06T02:52:08.521-0500 D QUERY [conn25] Winning plan: IXSCAN { ns: 1, lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.702-0500 c20011| 2016-04-06T02:52:08.521-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|13, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 fromMultiPlanner:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:550 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.707-0500 c20011| 2016-04-06T02:52:08.522-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: -100.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_MinKey" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-100.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|0 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.709-0500 c20011| 2016-04-06T02:52:08.522-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:28.710-0500 c20011| 2016-04-06T02:52:08.522-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:28.714-0500 c20011| 
2016-04-06T02:52:08.522-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:177 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.715-0500 c20011| 2016-04-06T02:52:08.522-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_MinKey" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.716-0500 c20011| 2016-04-06T02:52:08.522-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-100.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.722-0500 c20011| 2016-04-06T02:52:08.524-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1034 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.726-0500 c20011| 2016-04-06T02:52:08.524-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1034 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.730-0500 c20011| 2016-04-06T02:52:08.526-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|14, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|13, t: 1 }, name-id: "93" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.733-0500 c20011| 2016-04-06T02:52:08.526-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.734-0500 c20011| 2016-04-06T02:52:08.526-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.735-0500 c20011| 2016-04-06T02:52:08.526-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.739-0500 c20011| 2016-04-06T02:52:08.526-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|14, t: 1 } and is durable through: { ts: Timestamp 1459929128000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.745-0500 c20011| 2016-04-06T02:52:08.526-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|14, t: 1 } is 
not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|13, t: 1 }, name-id: "93" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.747-0500 c20011| 2016-04-06T02:52:08.526-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.754-0500 c20011| 2016-04-06T02:52:08.526-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.755-0500 c20011| 2016-04-06T02:52:08.526-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.760-0500 c20011| 2016-04-06T02:52:08.526-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|14, t: 1 } and is durable through: { ts: Timestamp 1459929128000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.764-0500 c20011| 2016-04-06T02:52:08.526-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929128000|14, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|13, t: 1 }, name-id: "93" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.766-0500 c20011| 2016-04-06T02:52:08.526-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.771-0500 c20011| 2016-04-06T02:52:08.526-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.774-0500 c20011| 2016-04-06T02:52:08.526-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.775-0500 c20011| 2016-04-06T02:52:08.526-0500 D COMMAND [conn13] run command local.$cmd { 
getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.779-0500 c20011| 2016-04-06T02:52:08.527-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.780-0500 c20011| 2016-04-06T02:52:08.527-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.783-0500 c20011| 2016-04-06T02:52:08.527-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|14, t: 1 } and is durable through: { ts: Timestamp 1459929128000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.795-0500 c20011| 2016-04-06T02:52:08.527-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.797-0500 c20011| 2016-04-06T02:52:08.527-0500 D REPL [conn17] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.800-0500 c20011| 2016-04-06T02:52:08.527-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.804-0500 c20011| 2016-04-06T02:52:08.527-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.807-0500 c20011| 2016-04-06T02:52:08.527-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.808-0500 c20011| 2016-04-06T02:52:08.527-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.812-0500 c20011| 2016-04-06T02:52:08.527-0500 D REPL [conn16] received notification that 
node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|14, t: 1 } and is durable through: { ts: Timestamp 1459929128000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.817-0500 c20011| 2016-04-06T02:52:08.528-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.822-0500 c20011| 2016-04-06T02:52:08.528-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.829-0500 c20011| 2016-04-06T02:52:08.528-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: -100.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_MinKey" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-100.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|0 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.836-0500 c20011| 2016-04-06T02:52:08.528-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.838-0500 c20011| 2016-04-06T02:52:08.528-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.839-0500 c20011| 2016-04-06T02:52:08.528-0500 D COMMAND [conn25] run command config.$cmd { create: "changelog", capped: true, size: 10485760, maxTimeMS: 30000 } 
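[Annotation] The records above trace one full pass of the config server's chunk-split commit: acquire the collection's distributed lock in config.locks via findAndModify, write both new chunk documents atomically through applyOps guarded by a preCondition on the highest lastmod, record the split in the capped config.changelog, then release the lock. A minimal mongo-shell sketch of the same three steps follows; the lock fields, epoch, and chunk-version values are illustrative placeholders adapted from the records above, not part of the test itself:

var configDB = db.getSiblingDB("config");
var lockTS = ObjectId();   // hypothetical lock id for this acquisition attempt
var epoch = ObjectId();    // hypothetical collection epoch
// 1. Acquire: succeeds only if the named lock is currently free (state 0).
configDB.locks.findAndModify({
    query:  { _id: "multidrop.coll", state: 0 },
    update: { $set: { ts: lockTS, state: 2, why: "splitting chunk" } },
    upsert: true, new: true
});
// 2. Commit both chunk halves in one applyOps; the preCondition re-reads
//    the newest lastmod, so a concurrent metadata write aborts the commit.
configDB.runCommand({
    applyOps: [
        { op: "u", b: true, ns: "config.chunks",
          o:  { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll",
                min: { _id: MinKey }, max: { _id: -100.0 },
                lastmod: Timestamp(1, 1), lastmodEpoch: epoch, shard: "shard0000" },
          o2: { _id: "multidrop.coll-_id_MinKey" } },
        { op: "u", b: true, ns: "config.chunks",
          o:  { _id: "multidrop.coll-_id_-100.0", ns: "multidrop.coll",
                min: { _id: -100.0 }, max: { _id: MaxKey },
                lastmod: Timestamp(1, 2), lastmodEpoch: epoch, shard: "shard0000" },
          o2: { _id: "multidrop.coll-_id_-100.0" } }
    ],
    preCondition: [ { ns: "config.chunks",
                      q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                      res: { lastmod: Timestamp(1, 0) } } ],
    writeConcern: { w: "majority", wtimeout: 15000 }
});
// 3. Release: match on ts so only this holder can unlock it.
configDB.locks.findAndModify({
    query:  { ts: lockTS },
    update: { $set: { state: 0 } }
});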
[js_test:multi_coll_drop] 2016-04-06T02:52:28.839-0500 c20011| 2016-04-06T02:52:08.528-0500 D STORAGE [conn25] create collection config.changelog { capped: true, size: 10485760, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.842-0500 c20011| 2016-04-06T02:52:08.528-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.848-0500 c20011| 2016-04-06T02:52:08.528-0500 I COMMAND [conn25] command config.changelog command: create { create: "changelog", capped: true, size: 10485760, maxTimeMS: 30000 } numYields:0 reslen:356 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.860-0500 c20011| 2016-04-06T02:52:08.528-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.528-0500-5704c02865c17830b843f17d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128528), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey } }, left: { min: { _id: MinKey }, max: { _id: -100.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.866-0500 c20011| 2016-04-06T02:52:08.528-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:871 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.880-0500 c20011| 2016-04-06T02:52:08.528-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:871 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.894-0500 c20011| 2016-04-06T02:52:08.530-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.894-0500 c20011| 2016-04-06T02:52:08.530-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.897-0500 c20011| 2016-04-06T02:52:08.530-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: 
Timestamp 1459929128000|15, t: 1 } and is durable through: { ts: Timestamp 1459929128000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.899-0500 c20011| 2016-04-06T02:52:08.530-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.902-0500 c20011| 2016-04-06T02:52:08.530-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.905-0500 c20011| 2016-04-06T02:52:08.530-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.907-0500 c20011| 2016-04-06T02:52:08.530-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.911-0500 c20011| 2016-04-06T02:52:08.530-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.913-0500 c20011| 2016-04-06T02:52:08.530-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|15, t: 1 } and is durable through: { ts: Timestamp 1459929128000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.917-0500 c20011| 2016-04-06T02:52:08.530-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.923-0500 c20011| 2016-04-06T02:52:08.531-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|15, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|14, t: 1 }, name-id: "94" } [js_test:multi_coll_drop] 2016-04-06T02:52:28.931-0500 c20011| 2016-04-06T02:52:08.531-0500 D COMMAND [conn14] run command 
local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.933-0500 c20011| 2016-04-06T02:52:08.531-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:28.943-0500 c20011| 2016-04-06T02:52:08.531-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.950-0500 c20011| 2016-04-06T02:52:08.531-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:28.951-0500 c20011| 2016-04-06T02:52:08.531-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.955-0500 c20011| 2016-04-06T02:52:08.531-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:28.957-0500 c20011| 2016-04-06T02:52:08.531-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|15, t: 1 } and is durable through: { ts: Timestamp 1459929128000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.958-0500 c20011| 2016-04-06T02:52:08.531-0500 D REPL [conn17] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.964-0500 c20011| 2016-04-06T02:52:08.531-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.968-0500 c20011| 2016-04-06T02:52:08.531-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.972-0500 c20011| 2016-04-06T02:52:08.531-0500 D REPL [conn16] received notification that node with memberID 1 
in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.974-0500 c20011| 2016-04-06T02:52:08.531-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|15, t: 1 } and is durable through: { ts: Timestamp 1459929128000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:28.980-0500 c20011| 2016-04-06T02:52:08.531-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.986-0500 c20011| 2016-04-06T02:52:08.531-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:28.998-0500 c20011| 2016-04-06T02:52:08.531-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.024-0500 c20011| 2016-04-06T02:52:08.531-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.528-0500-5704c02865c17830b843f17d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128528), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey } }, left: { min: { _id: MinKey }, max: { _id: -100.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.027-0500 c20011| 2016-04-06T02:52:08.532-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f17c') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 
2016-04-06T02:52:29.029-0500 c20011| 2016-04-06T02:52:08.532-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.033-0500 c20011| 2016-04-06T02:52:08.532-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.035-0500 c20011| 2016-04-06T02:52:08.532-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02865c17830b843f17c') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.039-0500 c20011| 2016-04-06T02:52:08.532-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.043-0500 c20011| 2016-04-06T02:52:08.532-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.045-0500 c20011| 2016-04-06T02:52:08.532-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.051-0500 c20011| 2016-04-06T02:52:08.534-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.057-0500 c20011| 2016-04-06T02:52:08.534-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.076-0500 c20011| 2016-04-06T02:52:08.534-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|16, t: 1 } and is durable through: { ts: Timestamp 1459929128000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.082-0500 c20011| 2016-04-06T02:52:08.534-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.089-0500 c20011| 2016-04-06T02:52:08.534-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.096-0500 c20011| 2016-04-06T02:52:08.534-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.097-0500 c20011| 2016-04-06T02:52:08.534-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.102-0500 c20011| 2016-04-06T02:52:08.534-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.115-0500 c20011| 2016-04-06T02:52:08.534-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|16, t: 1 } and is durable through: { ts: Timestamp 1459929128000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.121-0500 c20011| 2016-04-06T02:52:08.534-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.122-0500 c20011| 2016-04-06T02:52:08.535-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|16, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|15, t: 1 }, name-id: "95" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.125-0500 c20011| 2016-04-06T02:52:08.535-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.127-0500 c20011| 2016-04-06T02:52:08.535-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.133-0500 c20011| 2016-04-06T02:52:08.539-0500 D COMMAND [conn17] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.133-0500 c20011| 2016-04-06T02:52:08.539-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.137-0500 c20011| 2016-04-06T02:52:08.539-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|16, t: 1 } and is durable through: { ts: Timestamp 1459929128000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.138-0500 c20011| 2016-04-06T02:52:08.539-0500 D REPL [conn17] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.139-0500 c20011| 2016-04-06T02:52:08.539-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.144-0500 c20011| 2016-04-06T02:52:08.539-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.147-0500 c20011| 2016-04-06T02:52:08.539-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.164-0500 c20011| 2016-04-06T02:52:08.539-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f17c') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:612 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.173-0500 c20011| 2016-04-06T02:52:08.539-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } 
cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.177-0500 c20011| 2016-04-06T02:52:08.539-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.179-0500 c20011| 2016-04-06T02:52:08.539-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.184-0500 c20011| 2016-04-06T02:52:08.540-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.187-0500 c20011| 2016-04-06T02:52:08.540-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.190-0500 c20011| 2016-04-06T02:52:08.540-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.192-0500 c20011| 2016-04-06T02:52:08.540-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:29.197-0500 c20011| 2016-04-06T02:52:08.540-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:558 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.207-0500 c20011| 2016-04-06T02:52:08.541-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.208-0500 c20011| 2016-04-06T02:52:08.541-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.209-0500 c20011| 2016-04-06T02:52:08.541-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } 
and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.212-0500 c20011| 2016-04-06T02:52:08.541-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|16, t: 1 } and is durable through: { ts: Timestamp 1459929128000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.224-0500 c20011| 2016-04-06T02:52:08.541-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.232-0500 c20011| 2016-04-06T02:52:08.541-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.233-0500 c20011| 2016-04-06T02:52:08.541-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.239-0500 c20011| 2016-04-06T02:52:08.541-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.240-0500 c20011| 2016-04-06T02:52:08.541-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:29.244-0500 c20011| 2016-04-06T02:52:08.541-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:558 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.249-0500 c20011| 2016-04-06T02:52:08.542-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f17e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128542), why: "splitting chunk [{ _id: -100.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.250-0500 c20011| 2016-04-06T02:52:08.542-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.252-0500 c20011| 2016-04-06T02:52:08.542-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.256-0500 c20011| 2016-04-06T02:52:08.542-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.260-0500 c20011| 2016-04-06T02:52:08.542-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:603 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.264-0500 c20011| 2016-04-06T02:52:08.542-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:603 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.270-0500 c20011| 2016-04-06T02:52:08.544-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.271-0500 c20011| 2016-04-06T02:52:08.544-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.275-0500 c20011| 2016-04-06T02:52:08.544-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.280-0500 c20011| 2016-04-06T02:52:08.544-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|17, t: 1 } and is durable through: { ts: Timestamp 1459929128000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.286-0500 c20011| 2016-04-06T02:52:08.544-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.291-0500 c20011| 2016-04-06T02:52:08.545-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|17, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|16, t: 1 }, name-id: "96" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.295-0500 c20011| 2016-04-06T02:52:08.545-0500 D COMMAND [conn17] run command admin.$cmd 
{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.296-0500 c20011| 2016-04-06T02:52:08.545-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.298-0500 c20011| 2016-04-06T02:52:08.545-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|17, t: 1 } and is durable through: { ts: Timestamp 1459929128000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.301-0500 c20011| 2016-04-06T02:52:08.545-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929128000|17, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|16, t: 1 }, name-id: "96" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.303-0500 c20011| 2016-04-06T02:52:08.545-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.310-0500 c20011| 2016-04-06T02:52:08.545-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.312-0500 c20011| 2016-04-06T02:52:08.545-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.316-0500 c20011| 2016-04-06T02:52:08.545-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.319-0500 c20011| 2016-04-06T02:52:08.547-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.319-0500 c20011| 2016-04-06T02:52:08.547-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 
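
The dense getMore / replSetUpdatePosition traffic above is the machinery that advances the primary's commit point: conn13 and conn14 are the secondaries' cursors tailing the primary's oplog, and each replSetUpdatePosition report lets the primary bump _lastCommittedOpTime once an optime is durable on a majority. Every w:"majority" write in this sequence parks on exactly that, which is why each one is followed by a "Required snapshot optime ... is not yet part of the current 'committed' snapshot" line and then completes a few milliseconds later. The write being waited on at this point is the re-acquisition of the collection lock for the next split; a sketch of that acquire, with every field value copied from the conn25 entry above:

    var conf = new Mongo("mongovm16:20011").getDB("config");
    conf.runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },       // succeeds only if the lock is free
        update: { $set: {
            ts: ObjectId("5704c02865c17830b843f17e"),     // fresh lock instance id
            state: 2,                                     // 2 = exclusively held
            who: "mongovm16:20010:1459929128:185613966:conn5",
            process: "mongovm16:20010:1459929128:185613966",
            when: new Date(1459929128542),
            why: "splitting chunk [{ _id: -100.0 }, { _id: MaxKey }) in multidrop.coll"
        } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
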
2016-04-06T02:52:29.322-0500 c20011| 2016-04-06T02:52:08.547-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.325-0500 c20011| 2016-04-06T02:52:08.547-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|17, t: 1 } and is durable through: { ts: Timestamp 1459929128000|17, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.327-0500 c20011| 2016-04-06T02:52:08.547-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|17, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.332-0500 c20011| 2016-04-06T02:52:08.547-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.341-0500 c20011| 2016-04-06T02:52:08.547-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f17e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128542), why: "splitting chunk [{ _id: -100.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f17e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128542), why: "splitting chunk [{ _id: -100.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:612 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.347-0500 c20011| 2016-04-06T02:52:08.547-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.355-0500 c20011| 2016-04-06T02:52:08.547-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { 
acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.379-0500 c20011| 2016-04-06T02:52:08.547-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|17, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.381-0500 c20011| 2016-04-06T02:52:08.547-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.383-0500 c20011| 2016-04-06T02:52:08.547-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|17, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.384-0500 c20011| 2016-04-06T02:52:08.547-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:29.387-0500 c20011| 2016-04-06T02:52:08.547-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.392-0500 c20011| 2016-04-06T02:52:08.547-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|17, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.401-0500 c20011| 2016-04-06T02:52:08.548-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.401-0500 c20011| 2016-04-06T02:52:08.548-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.405-0500 c20011| 2016-04-06T02:52:08.548-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|17, t: 1 } and is durable through: { ts: Timestamp 1459929128000|17, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.410-0500 c20011| 2016-04-06T02:52:08.548-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.416-0500 c20011| 2016-04-06T02:52:08.548-0500 I COMMAND [conn17] command admin.$cmd command: 
replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.423-0500 c20011| 2016-04-06T02:52:08.548-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|2 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|17, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.425-0500 c20011| 2016-04-06T02:52:08.548-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.433-0500 c20011| 2016-04-06T02:52:08.548-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|2 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|17, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.436-0500 2016-04-06T02:52:14.049-0500 I NETWORK [thread2] trying reconnect to mongovm16:20011 (192.168.100.28) failed [js_test:multi_coll_drop] 2016-04-06T02:52:29.440-0500 c20011| 2016-04-06T02:52:08.548-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.442-0500 c20011| 2016-04-06T02:52:08.548-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:29.449-0500 c20011| 2016-04-06T02:52:08.548-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|2 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|17, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:558 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.456-0500 c20011| 2016-04-06T02:52:08.548-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: -99.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-100.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: MaxKey }, shard: 
"shard0000" }, o2: { _id: "multidrop.coll-_id_-99.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|2 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.457-0500 c20011| 2016-04-06T02:52:08.548-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:29.460-0500 c20011| 2016-04-06T02:52:08.549-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:29.464-0500 c20011| 2016-04-06T02:52:08.549-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:185 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.469-0500 c20011| 2016-04-06T02:52:08.549-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-100.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.469-0500 c20011| 2016-04-06T02:52:08.549-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-99.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.474-0500 c20011| 2016-04-06T02:52:08.549-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1040 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.478-0500 c20011| 2016-04-06T02:52:08.550-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1040 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.486-0500 c20011| 2016-04-06T02:52:08.552-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.486-0500 c20011| 2016-04-06T02:52:08.552-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.490-0500 c20011| 2016-04-06T02:52:08.552-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: 
Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.493-0500 c20011| 2016-04-06T02:52:08.552-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|18, t: 1 } and is durable through: { ts: Timestamp 1459929128000|17, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.500-0500 c20011| 2016-04-06T02:52:08.552-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.503-0500 c20011| 2016-04-06T02:52:08.553-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|18, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|17, t: 1 }, name-id: "97" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.506-0500 c20011| 2016-04-06T02:52:08.554-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.509-0500 c20011| 2016-04-06T02:52:08.554-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.513-0500 c20011| 2016-04-06T02:52:08.554-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.518-0500 c20011| 2016-04-06T02:52:08.554-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|18, t: 1 } and is durable through: { ts: Timestamp 1459929128000|18, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.526-0500 c20011| 2016-04-06T02:52:08.554-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|18, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.534-0500 c20011| 2016-04-06T02:52:08.554-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms 
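
The applyOps that conn25 issued above (acknowledged in the next entry) is the split commit itself: both halves of the chunk are upserted into config.chunks as a single unit, and the preCondition clause requires that the collection's highest lastmod still be Timestamp 1000|2, so the commit fails cleanly if any concurrent metadata change slipped in. A sketch of the same commit with the documents from this log, written with the shell's Timestamp(secs, inc) form for the chunk versions:

    var conf = new Mongo("mongovm16:20011").getDB("config");
    conf.runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",      // upsert the left half
              o: { _id: "multidrop.coll-_id_-100.0",
                   lastmod: Timestamp(1000, 3),
                   lastmodEpoch: ObjectId("5704c02806c33406d4d9c0c0"),
                   ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: -99.0 },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-100.0" } },
            { op: "u", b: true, ns: "config.chunks",      // upsert the right half
              o: { _id: "multidrop.coll-_id_-99.0",
                   lastmod: Timestamp(1000, 4),
                   lastmodEpoch: ObjectId("5704c02806c33406d4d9c0c0"),
                   ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: MaxKey },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-99.0" } }
        ],
        preCondition: [
            { ns: "config.chunks",
              q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
              res: { lastmod: Timestamp(1000, 2) } }      // abort if another change bumped it
        ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
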
[js_test:multi_coll_drop] 2016-04-06T02:52:29.547-0500 c20011| 2016-04-06T02:52:08.554-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: -99.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-100.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-99.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|2 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.554-0500 c20011| 2016-04-06T02:52:08.554-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.554-0500-5704c02865c17830b843f17f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128554), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -100.0 }, max: { _id: MaxKey } }, left: { min: { _id: -100.0 }, max: { _id: -99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -99.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.556-0500 c20011| 2016-04-06T02:52:08.554-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.565-0500 c20011| 2016-04-06T02:52:08.555-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.568-0500 c20011| 2016-04-06T02:52:08.556-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.568-0500 c20011| 2016-04-06T02:52:08.556-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 
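
After the commit, conn25 writes a matching "split" audit record into config.changelog (the insert it is shown running just above), carrying the before range and the resulting left/right ranges. That trail is plain data; a hypothetical follow-up query (not part of this test) to read it back:

    var conf = new Mongo("mongovm16:20011").getDB("config");
    conf.changelog.find({ ns: "multidrop.coll", what: "split" })
        .sort({ time: -1 })                               // newest split first
        .forEach(function (doc) { printjson(doc.details); });
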
2016-04-06T02:52:29.571-0500 c20011| 2016-04-06T02:52:08.556-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|18, t: 1 } and is durable through: { ts: Timestamp 1459929128000|17, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.572-0500 c20011| 2016-04-06T02:52:08.556-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.578-0500 c20011| 2016-04-06T02:52:08.556-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.579-0500 c20011| 2016-04-06T02:52:08.556-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.583-0500 c20011| 2016-04-06T02:52:08.558-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|19, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|18, t: 1 }, name-id: "98" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.591-0500 c20011| 2016-04-06T02:52:08.558-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.592-0500 c20011| 2016-04-06T02:52:08.558-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.596-0500 c20011| 2016-04-06T02:52:08.558-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.598-0500 c20011| 2016-04-06T02:52:08.558-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|19, t: 1 } and is durable through: { ts: Timestamp 1459929128000|18, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.602-0500 c20011| 2016-04-06T02:52:08.558-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|19, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|18, t: 1 }, name-id: "98" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.612-0500 
c20011| 2016-04-06T02:52:08.558-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.622-0500 c20011| 2016-04-06T02:52:08.559-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|18, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.644-0500 c20011| 2016-04-06T02:52:08.559-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.651-0500 c20011| 2016-04-06T02:52:08.561-0500 D COMMAND [conn17] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.652-0500 c20011| 2016-04-06T02:52:08.561-0500 D COMMAND [conn17] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:29.654-0500 c20011| 2016-04-06T02:52:08.561-0500 D REPL [conn17] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|19, t: 1 } and is durable through: { ts: Timestamp 1459929128000|17, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.657-0500 c20011| 2016-04-06T02:52:08.561-0500 D REPL [conn17] Required snapshot optime: { ts: Timestamp 1459929128000|19, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|18, t: 1 }, name-id: "98" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.658-0500 c20012| 2016-04-06T02:52:08.357-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:29.661-0500 c20012| 2016-04-06T02:52:08.357-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 298 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.357-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.662-0500 c20012| 2016-04-06T02:52:08.357-0500 D COMMAND [conn7] run command config.$cmd { find: "databases", filter: { _id: /^multidrop$/i }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|1, t: 1 } 
}, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.666-0500 c20012| 2016-04-06T02:52:08.357-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.669-0500 c20012| 2016-04-06T02:52:08.357-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 298 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.671-0500 c20012| 2016-04-06T02:52:08.357-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "databases", filter: { _id: /^multidrop$/i }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.672-0500 c20012| 2016-04-06T02:52:08.357-0500 D QUERY [conn7] Collection config.databases does not exist. Using EOF plan: query: { _id: /^multidrop$/i } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:29.674-0500 c20012| 2016-04-06T02:52:08.357-0500 I COMMAND [conn7] command config.databases command: find { find: "databases", filter: { _id: /^multidrop$/i }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:373 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.676-0500 c20012| 2016-04-06T02:52:08.363-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36771 #10 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:29.677-0500 c20012| 2016-04-06T02:52:08.363-0500 D COMMAND [conn10] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:52:29.681-0500 c20012| 2016-04-06T02:52:08.363-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.681-0500 c20012| 2016-04-06T02:52:08.363-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.683-0500 c20012| 2016-04-06T02:52:08.363-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.685-0500 c20012| 2016-04-06T02:52:08.363-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.691-0500 c20012| 2016-04-06T02:52:08.363-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:29.700-0500 c20012| 2016-04-06T02:52:08.365-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 298 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|2, t: 1, h: 3529413680518098651, v: 2, op: "i", ns: "config.lockpings", o: { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929128362) } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.702-0500 c20012| 2016-04-06T02:52:08.365-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog 
starting at ts: Timestamp 1459929128000|2 and ending at ts: Timestamp 1459929128000|2 [js_test:multi_coll_drop] 2016-04-06T02:52:29.703-0500 c20012| 2016-04-06T02:52:08.365-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:29.705-0500 c20012| 2016-04-06T02:52:08.365-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.707-0500 c20012| 2016-04-06T02:52:08.365-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.707-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.710-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.710-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.713-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.715-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.718-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.718-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.719-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.720-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.721-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.722-0500 c20012| 2016-04-06T02:52:08.366-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:29.722-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.725-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.727-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.730-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.731-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.732-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer 
worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.734-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.734-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.736-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.736-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.738-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.739-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.741-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.741-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.743-0500 c20012| 2016-04-06T02:52:08.366-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.744-0500 c20012| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.748-0500 c20012| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.750-0500 c20012| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.753-0500 c20012| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.754-0500 c20012| 2016-04-06T02:52:08.367-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.758-0500 c20012| 2016-04-06T02:52:08.367-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 300 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.367-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.762-0500 c20012| 2016-04-06T02:52:08.367-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 300 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.765-0500 c20012| 2016-04-06T02:52:08.368-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:29.771-0500 c20012| 2016-04-06T02:52:08.368-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.783-0500 c20012| 2016-04-06T02:52:08.368-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 301 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.786-0500 c20012| 2016-04-06T02:52:08.368-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 301 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.790-0500 c20012| 2016-04-06T02:52:08.368-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 301 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.795-0500 c20012| 2016-04-06T02:52:08.370-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.800-0500 c20012| 2016-04-06T02:52:08.370-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 303 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:29.803-0500 c20012| 2016-04-06T02:52:08.371-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 303 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.805-0500 c20012| 2016-04-06T02:52:08.371-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 303 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.806-0500 c20012| 2016-04-06T02:52:08.371-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 300 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.806-0500 c20012| 2016-04-06T02:52:08.371-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.808-0500 c20012| 2016-04-06T02:52:08.371-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:29.810-0500 c20012| 2016-04-06T02:52:08.371-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 306 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.371-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.813-0500 c20012| 2016-04-06T02:52:08.371-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 306 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.819-0500 c20012| 2016-04-06T02:52:08.374-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 306 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|3, t: 1, h: -1942800269136220941, v: 2, op: "c", ns: "config.$cmd", o: { create: "databases" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.820-0500 c20012| 2016-04-06T02:52:08.374-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|3 and ending at ts: Timestamp 1459929128000|3 [js_test:multi_coll_drop] 2016-04-06T02:52:29.821-0500 c20012| 2016-04-06T02:52:08.374-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:29.821-0500 c20012| 2016-04-06T02:52:08.374-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.827-0500 c20012| 2016-04-06T02:52:08.374-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.827-0500 c20012| 2016-04-06T02:52:08.374-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.830-0500 c20012| 2016-04-06T02:52:08.374-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.832-0500 c20012| 2016-04-06T02:52:08.374-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.833-0500 c20012| 2016-04-06T02:52:08.374-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.836-0500 c20012| 2016-04-06T02:52:08.374-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.838-0500 c20012| 2016-04-06T02:52:08.374-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.843-0500 c20012| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.844-0500 c20012| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.847-0500 c20012| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.849-0500 c20012| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.851-0500 c20012| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.852-0500 c20012| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.853-0500 c20012| 2016-04-06T02:52:08.375-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:29.854-0500 c20012| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.855-0500 c20012| 2016-04-06T02:52:08.375-0500 D STORAGE [repl writer worker 15] create collection config.databases {} [js_test:multi_coll_drop] 2016-04-06T02:52:29.860-0500 c20012| 2016-04-06T02:52:08.375-0500 D STORAGE [repl writer worker 15] stored meta data for config.databases @ RecordId(16) [js_test:multi_coll_drop] 2016-04-06T02:52:29.861-0500 c20012| 2016-04-06T02:52:08.375-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.868-0500 c20012| 2016-04-06T02:52:08.375-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-37-6577373056560964212 config: 
type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:52:29.874-0500 c20012| 2016-04-06T02:52:08.377-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 308 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.377-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.875-0500 c20012| 2016-04-06T02:52:08.377-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 308 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.880-0500 c20012| 2016-04-06T02:52:08.377-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 308 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|4, t: 1, h: 4575545370530673351, v: 2, op: "i", ns: "config.databases", o: { _id: "multidrop", primary: "shard0000", partitioned: true } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.882-0500 c20012| 2016-04-06T02:52:08.377-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|4 and ending at ts: Timestamp 1459929128000|4 [js_test:multi_coll_drop] 2016-04-06T02:52:29.888-0500 c20012| 2016-04-06T02:52:08.379-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 310 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.379-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.890-0500 c20012| 2016-04-06T02:52:08.379-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 310 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.893-0500 c20012| 2016-04-06T02:52:08.384-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-37-6577373056560964212 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:29.897-0500 c20012| 2016-04-06T02:52:08.384-0500 D STORAGE [repl writer worker 15] config.databases: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:29.904-0500 c20012| 2016-04-06T02:52:08.384-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-38-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.databases" }), [js_test:multi_coll_drop] 2016-04-06T02:52:29.910-0500 c20012| 2016-04-06T02:52:08.384-0500 D STORAGE [repl writer worker 15] create uri: table:index-38-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.databases" }), [js_test:multi_coll_drop] 2016-04-06T02:52:29.916-0500 c20012| 2016-04-06T02:52:08.389-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-38-6577373056560964212 ok range 6 -> 6 current: 6 
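The sequence above shows the secondary c20012 pulling two oplog entries from its sync source — the create of config.databases at ts 1459929128000|3 and the insert of { _id: "multidrop", primary: "shard0000", partitioned: true } at |4 — and WiredTiger materializing the collection's record store and _id index as the repl writer applies them. As a minimal sketch (not part of multi_coll_drop.js), the same two entries can be read back from the sync source's oplog in the mongo shell; the host, port, and timestamps are taken from this log (the shell's Timestamp(seconds, inc) corresponds to the log's "Timestamp 1459929128000|3" form), and everything else is illustrative:

    // Sketch: inspect the two oplog entries applied above on the sync source.
    // mongovm16:20011 and the timestamps come from this log; nothing here is
    // part of the test itself.
    var source = new Mongo("mongovm16:20011");
    var oplog = source.getDB("local").getCollection("oplog.rs");
    oplog.find({
            ts: { $gte: Timestamp(1459929128, 3), $lte: Timestamp(1459929128, 4) }
        })
        .sort({ $natural: 1 })   // natural order is oplog order
        .forEach(printjson);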
[js_test:multi_coll_drop] 2016-04-06T02:52:29.917-0500 c20012| 2016-04-06T02:52:08.389-0500 D STORAGE [repl writer worker 15] config.databases: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:52:29.921-0500 c20012| 2016-04-06T02:52:08.390-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.922-0500 c20012| 2016-04-06T02:52:08.390-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.927-0500 c20012| 2016-04-06T02:52:08.393-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 310 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.931-0500 c20012| 2016-04-06T02:52:08.394-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.932-0500 c20012| 2016-04-06T02:52:08.394-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:29.938-0500 c20012| 2016-04-06T02:52:08.394-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 312 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.394-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.939-0500 c20012| 2016-04-06T02:52:08.394-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 312 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.939-0500 c20012| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 312 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.942-0500 c20012| 2016-04-06T02:52:08.395-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.943-0500 c20012| 2016-04-06T02:52:08.395-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:29.944-0500 c20012| 2016-04-06T02:52:08.395-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.949-0500 c20012| 2016-04-06T02:52:08.395-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.954-0500 c20012| 2016-04-06T02:52:08.395-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 314 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.395-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:29.956-0500 c20012| 2016-04-06T02:52:08.395-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.958-0500 c20012| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 314 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:29.962-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 5] shutting down 
thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.964-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.964-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.967-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.969-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.971-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.972-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.973-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.974-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.974-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.980-0500 c20012| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:29.985-0500 c20012| 2016-04-06T02:52:08.396-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 314 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|5, t: 1, h: 1999879611050407382, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:29.986-0500 c20012| 2016-04-06T02:52:08.396-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|5 and ending at ts: Timestamp 1459929128000|5 [js_test:multi_coll_drop] 2016-04-06T02:52:29.990-0500 c20012| 2016-04-06T02:52:08.397-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:29.994-0500 c20012| 2016-04-06T02:52:08.397-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.020-0500 c20012| 2016-04-06T02:52:08.397-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.021-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.029-0500 c20012| 2016-04-06T02:52:08.397-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 316 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.031-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.033-0500 c20012| 2016-04-06T02:52:08.397-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 316 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.046-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.048-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.048-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.048-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.052-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.053-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.056-0500 c20012| 2016-04-06T02:52:08.397-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 316 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.056-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.059-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool 
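The replSetUpdatePosition traffic above is what advances the majority-commit point: each secondary reports its durable and applied optimes upstream, and once a majority is durable at { ts: 1459929128000|5, t: 1 } the committed snapshot moves forward — the event the blocked reader on conn7 is waiting for. As a hedged illustration, the same kind of read can be issued from the shell; it mirrors the find that conn7 logs further down and blocks until the named optime is part of a committed snapshot (host and parameters are copied from this log, the rest is illustrative):

    // Sketch of the majority read the progress reporting above unblocks.
    var cfg = new Mongo("mongovm16:20012");   // config secondary c20012
    printjson(cfg.getDB("config").runCommand({
        find: "databases",
        filter: { _id: "multidrop" },
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929128, 5), t: NumberLong(1) }
        },
        limit: 1,
        maxTimeMS: 30000
    }));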
[js_test:multi_coll_drop] 2016-04-06T02:52:30.060-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.062-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.064-0500 c20012| 2016-04-06T02:52:08.397-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:30.065-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.066-0500 c20012| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.067-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.070-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.071-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.072-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.072-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.073-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.073-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.074-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.077-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.078-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.080-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.081-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.082-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.086-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.088-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:30.089-0500 c20012| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.091-0500 c20012| 2016-04-06T02:52:08.398-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 318 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.398-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.094-0500 c20012| 2016-04-06T02:52:08.398-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 318 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.098-0500 c20012| 2016-04-06T02:52:08.399-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.102-0500 c20012| 2016-04-06T02:52:08.399-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 319 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.103-0500 c20012| 2016-04-06T02:52:08.399-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 319 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.103-0500 c20012| 2016-04-06T02:52:08.399-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 319 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.110-0500 c20012| 2016-04-06T02:52:08.400-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 318 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.115-0500 c20012| 2016-04-06T02:52:08.400-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.118-0500 c20012| 2016-04-06T02:52:08.400-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:30.121-0500 c20012| 2016-04-06T02:52:08.400-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 322 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.400-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.122-0500 c20012| 2016-04-06T02:52:08.400-0500 D ASIO 
[NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 322 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.139-0500 c20012| 2016-04-06T02:52:08.400-0500 D COMMAND [conn7] run command config.$cmd { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.143-0500 c20012| 2016-04-06T02:52:08.400-0500 D REPL [conn7] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929128000|5, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929128000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.148-0500 c20012| 2016-04-06T02:52:08.400-0500 D REPL [conn7] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999980μs [js_test:multi_coll_drop] 2016-04-06T02:52:30.156-0500 c20012| 2016-04-06T02:52:08.401-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.157-0500 c20012| 2016-04-06T02:52:08.401-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.159-0500 c20012| 2016-04-06T02:52:08.402-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.163-0500 c20012| 2016-04-06T02:52:08.402-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.165-0500 c20012| 2016-04-06T02:52:08.402-0500 D REPL [conn7] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29998362μs [js_test:multi_coll_drop] 2016-04-06T02:52:30.173-0500 c20012| 2016-04-06T02:52:08.402-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 323 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.175-0500 c20012| 2016-04-06T02:52:08.402-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.176-0500 c20012| 2016-04-06T02:52:08.402-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 323 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.179-0500 c20011| 2016-04-06T02:52:08.561-0500 D REPL [conn17] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.182-0500 c20011| 2016-04-06T02:52:08.561-0500 I COMMAND [conn17] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.186-0500 s20014| 2016-04-06T02:52:14.043-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 250 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.192-0500 d20010| 2016-04-06T02:52:14.044-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -81.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -80.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|40, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:30.193-0500 s20014| 2016-04-06T02:52:14.044-0500 I COMMAND [conn1] splitting chunk [{ _id: -81.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:30.195-0500 s20014| 2016-04-06T02:52:14.044-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20010, no events [js_test:multi_coll_drop] 2016-04-06T02:52:30.196-0500 d20010| 2016-04-06T02:52:14.552-0500 I NETWORK [conn5] Socket closed remotely, no longer connected (idle 6 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:52:30.196-0500 d20010| 2016-04-06T02:52:14.553-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:30.198-0500 d20010| 2016-04-06T02:52:15.055-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:30.201-0500 d20010| 2016-04-06T02:52:15.556-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:30.210-0500 c20011| 2016-04-06T02:52:08.561-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 
0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.211-0500 c20011| 2016-04-06T02:52:08.561-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:30.223-0500 c20011| 2016-04-06T02:52:08.561-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|19, t: 1 } and is durable through: { ts: Timestamp 1459929128000|18, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.224-0500 c20011| 2016-04-06T02:52:08.561-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|19, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|18, t: 1 }, name-id: "98" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.231-0500 c20011| 2016-04-06T02:52:08.562-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.241-0500 c20011| 2016-04-06T02:52:08.562-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.247-0500 c20011| 2016-04-06T02:52:08.562-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|18, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.260-0500 c20011| 2016-04-06T02:52:08.563-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.260-0500 c20011| 2016-04-06T02:52:08.563-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:30.263-0500 c20011| 2016-04-06T02:52:08.563-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.266-0500 c20011| 2016-04-06T02:52:08.563-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has 
reached optime: { ts: Timestamp 1459929128000|19, t: 1 } and is durable through: { ts: Timestamp 1459929128000|19, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.266-0500 c20011| 2016-04-06T02:52:08.563-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|19, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.276-0500 2016-04-06T02:52:15.851-0500 I NETWORK [thread2] c20011| 2016-04-06T02:52:08.563-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.294-0500 c20011| 2016-04-06T02:52:08.563-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|18, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.316-0500 c20011| 2016-04-06T02:52:08.563-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.554-0500-5704c02865c17830b843f17f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128554), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -100.0 }, max: { _id: MaxKey } }, left: { min: { _id: -100.0 }, max: { _id: -99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -99.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.316-0500 reconnect mongovm16:20011 (192.168.100.28) ok [js_test:multi_coll_drop] 2016-04-06T02:52:30.319-0500 c20013| 2016-04-06T02:52:08.393-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 315 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.320-0500 c20013| 2016-04-06T02:52:08.393-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 315 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.322-0500 c20013| 2016-04-06T02:52:08.394-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.323-0500 c20013| 2016-04-06T02:52:08.394-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 312 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.330-0500 c20013| 2016-04-06T02:52:08.394-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.341-0500 c20013| 2016-04-06T02:52:08.394-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 318 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.343-0500 c20013| 2016-04-06T02:52:08.394-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 318 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.345-0500 c20013| 2016-04-06T02:52:08.394-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 318 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.346-0500 c20013| 2016-04-06T02:52:08.394-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.352-0500 c20013| 2016-04-06T02:52:08.394-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:30.355-0500 c20013| 2016-04-06T02:52:08.394-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 320 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.394-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.357-0500 c20013| 2016-04-06T02:52:08.394-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 320 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.365-0500 c20013| 2016-04-06T02:52:08.395-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.370-0500 c20013| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 321 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.371-0500 c20013| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 321 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.372-0500 c20013| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 321 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.376-0500 c20013| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 320 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.376-0500 c20013| 2016-04-06T02:52:08.395-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.379-0500 c20013| 2016-04-06T02:52:08.395-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:30.382-0500 c20013| 2016-04-06T02:52:08.395-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 324 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.395-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.383-0500 c20013| 2016-04-06T02:52:08.395-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 324 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.390-0500 c20013| 2016-04-06T02:52:08.396-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 324 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|5, t: 1, h: 1999879611050407382, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.393-0500 c20013| 2016-04-06T02:52:08.396-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|5 and ending at ts: Timestamp 1459929128000|5 [js_test:multi_coll_drop] 2016-04-06T02:52:30.394-0500 c20013| 2016-04-06T02:52:08.396-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.396-0500 c20013| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.397-0500 c20013| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.397-0500 c20013| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.400-0500 c20013| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.402-0500 c20013| 2016-04-06T02:52:08.396-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.403-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.403-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.404-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.405-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.407-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.412-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.413-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.415-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.418-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.421-0500 c20013| 2016-04-06T02:52:08.397-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:30.426-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.431-0500 c20011| 2016-04-06T02:52:08.563-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f17e') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.434-0500 c20011| 2016-04-06T02:52:08.563-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.461-0500 c20011| 2016-04-06T02:52:08.563-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f17e') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.464-0500 c20011| 2016-04-06T02:52:08.563-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.471-0500 c20011| 2016-04-06T02:52:08.563-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.472-0500 c20011| 2016-04-06T02:52:08.563-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:30.474-0500 c20011| 2016-04-06T02:52:08.563-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|18, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.482-0500 c20011| 2016-04-06T02:52:08.563-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|19, t: 1 } and is durable through: { ts: Timestamp 1459929128000|19, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.485-0500 c20011| 2016-04-06T02:52:08.563-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.490-0500 c20011| 2016-04-06T02:52:08.563-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.495-0500 c20011| 2016-04-06T02:52:08.564-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.499-0500 c20011| 2016-04-06T02:52:08.565-0500 D COMMAND [conn16] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.499-0500 c20011| 2016-04-06T02:52:08.565-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:30.502-0500 d20010| 2016-04-06T02:52:16.057-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:30.504-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.505-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.506-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.507-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.508-0500 c20012| 2016-04-06T02:52:08.402-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 323 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.509-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.510-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.511-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.514-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.518-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.522-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.525-0500 c20011| 2016-04-06T02:52:08.566-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.536-0500 c20011| 2016-04-06T02:52:08.566-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|20, t: 1 } and is durable through: { ts: Timestamp 1459929128000|19, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.538-0500 c20011| 2016-04-06T02:52:08.566-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.540-0500 c20011| 2016-04-06T02:52:08.566-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.543-0500 c20011| 2016-04-06T02:52:08.566-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|20, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|19, t: 1 }, name-id: "99" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.544-0500 c20011| 2016-04-06T02:52:08.566-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.548-0500 c20011| 2016-04-06T02:52:08.566-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.552-0500 c20011| 2016-04-06T02:52:08.569-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.555-0500 c20011| 2016-04-06T02:52:08.569-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.556-0500 c20011| 2016-04-06T02:52:08.569-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:30.558-0500 c20011| 2016-04-06T02:52:08.569-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|20, t: 1 } and is durable through: { ts: Timestamp 1459929128000|19, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.561-0500 c20011| 2016-04-06T02:52:08.569-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|20, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|19, t: 1 }, name-id: "99" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.563-0500 c20011| 2016-04-06T02:52:08.569-0500 D REPL [conn12] received notification that 
node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.571-0500 c20011| 2016-04-06T02:52:08.569-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.571-0500 d20010| 2016-04-06T02:52:16.559-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:30.576-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.577-0500 c20013| 2016-04-06T02:52:08.397-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.578-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.580-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.582-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.582-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.589-0500 c20011| 2016-04-06T02:52:08.572-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.589-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.591-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.591-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.594-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.594-0500 c20012| 2016-04-06T02:52:08.402-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 
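
The findAndModify on config.locks running through the lines above is the release half of the config server's distributed lock protocol: the lock document is looked up by its session ObjectId via the ts_1 index, and its state is set back to 0 (unlocked) under w: "majority", so the release only returns once a majority of the config replica set has replicated it. Below is a minimal shell sketch of the same command; the ObjectId is a placeholder, since every lock session mints its own, and the state codes (0 = unlocked, 2 = held) are inferred from the lock documents visible elsewhere in this log.

    // Release a config-server distributed lock, mirroring the log above.
    // Placeholder session id; real callers pass the ObjectId they acquired with.
    var configDB = db.getSiblingDB("config");
    var res = configDB.runCommand({
        findAndModify: "locks",
        query: { ts: ObjectId("000000000000000000000000") },
        update: { $set: { state: 0 } },                    // 0 = unlocked
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    printjson(res);

The majority write concern is what produces the "Required snapshot optime ... is not yet part of the current 'committed' snapshot" waits seen nearby: the release cannot acknowledge until the commit point catches up to the unlock's optime.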
2016-04-06T02:52:30.595-0500 c20012| 2016-04-06T02:52:08.402-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.600-0500 c20012| 2016-04-06T02:52:08.402-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.604-0500 c20012| 2016-04-06T02:52:08.402-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.608-0500 c20012| 2016-04-06T02:52:08.402-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 325 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.609-0500 c20012| 2016-04-06T02:52:08.402-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 325 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.611-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.611-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.614-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.619-0500 c20012| 2016-04-06T02:52:08.403-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 325 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.620-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.623-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.628-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.629-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.633-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.633-0500 
c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.635-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.637-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.639-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.641-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.652-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.654-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.657-0500 c20012| 2016-04-06T02:52:08.403-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.661-0500 c20012| 2016-04-06T02:52:08.403-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.668-0500 c20012| 2016-04-06T02:52:08.403-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.675-0500 c20012| 2016-04-06T02:52:08.403-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.680-0500 c20012| 2016-04-06T02:52:08.403-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.684-0500 c20012| 2016-04-06T02:52:08.403-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 327 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.686-0500 c20012| 2016-04-06T02:52:08.403-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 327 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.688-0500 c20012| 2016-04-06T02:52:08.403-0500 D QUERY [conn7] Using idhack: query: { _id: "multidrop" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:30.691-0500 c20012| 2016-04-06T02:52:08.403-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 327 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.694-0500 c20012| 2016-04-06T02:52:08.403-0500 I COMMAND [conn7] command config.databases command: find { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:437 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.702-0500 c20012| 2016-04-06T02:52:08.404-0500 D COMMAND [conn7] run command config.$cmd { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.706-0500 c20012| 2016-04-06T02:52:08.404-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.710-0500 c20012| 2016-04-06T02:52:08.404-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.712-0500 c20012| 2016-04-06T02:52:08.404-0500 D QUERY [conn7] Using idhack: query: { _id: "multidrop" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:30.716-0500 c20012| 2016-04-06T02:52:08.404-0500 I COMMAND [conn7] command config.databases command: find { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:437 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.722-0500 c20012| 2016-04-06T02:52:08.406-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.729-0500 c20012| 2016-04-06T02:52:08.406-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 329 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.733-0500 c20012| 2016-04-06T02:52:08.406-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 329 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.735-0500 c20012| 2016-04-06T02:52:08.407-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 329 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.739-0500 c20012| 2016-04-06T02:52:08.414-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 322 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|6, t: 1, h: -3361189010770049215, v: 2, op: "i", ns: "config.locks", o: { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c02806c33406d4d9c0be'), who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128413), why: "shardCollection" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.741-0500 c20012| 2016-04-06T02:52:08.414-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|6 and ending at ts: Timestamp 1459929128000|6 [js_test:multi_coll_drop] 
2016-04-06T02:52:30.746-0500 c20012| 2016-04-06T02:52:08.414-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.747-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.748-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.751-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.752-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.753-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.757-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.760-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.764-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.776-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.780-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.781-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.792-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.792-0500 c20012| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.795-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.798-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.800-0500 c20012| 2016-04-06T02:52:08.415-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:30.800-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.801-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.803-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.805-0500 c20012| 
2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.808-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.812-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.815-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.816-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.817-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.821-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.824-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.824-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.828-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.829-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.829-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.830-0500 c20012| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.841-0500 c20012| 2016-04-06T02:52:08.416-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 332 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.416-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.845-0500 c20012| 2016-04-06T02:52:08.416-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 332 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.846-0500 c20012| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.848-0500 c20012| 2016-04-06T02:52:08.418-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.853-0500 c20012| 2016-04-06T02:52:08.418-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.863-0500 c20012| 2016-04-06T02:52:08.418-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 333 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.866-0500 c20012| 2016-04-06T02:52:08.418-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 333 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.868-0500 c20012| 2016-04-06T02:52:08.418-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 333 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.877-0500 c20012| 2016-04-06T02:52:08.419-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.899-0500 c20012| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 335 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:30.901-0500 c20012| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 335 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.905-0500 c20012| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 335 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.931-0500 c20012| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 332 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.931-0500 c20012| 2016-04-06T02:52:08.420-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.934-0500 c20012| 2016-04-06T02:52:08.420-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:30.940-0500 c20012| 2016-04-06T02:52:08.420-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 338 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.420-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.941-0500 c20012| 2016-04-06T02:52:08.420-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 338 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:30.942-0500 c20012| 2016-04-06T02:52:08.420-0500 D COMMAND [conn7] run command config.$cmd { count: "chunks", query: { ns: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.944-0500 c20012| 2016-04-06T02:52:08.420-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:30.952-0500 c20012| 2016-04-06T02:52:08.420-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ count: "chunks", query: { ns: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.955-0500 c20012| 2016-04-06T02:52:08.420-0500 D QUERY [conn7] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: 'ns_1_min_1' io: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.958-0500 c20012| 2016-04-06T02:52:08.421-0500 D QUERY [conn7] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: 'ns_1_shard_1_min_1' io: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.960-0500 c20012| 2016-04-06T02:52:08.421-0500 D QUERY [conn7] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: 'ns_1_lastmod_1' io: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:30.961-0500 c20012| 2016-04-06T02:52:08.421-0500 D QUERY [conn7] Using fast count: query: { ns: "multidrop.coll" } sort: {} projection: {}, planSummary: COUNT_SCAN { ns: 1, min: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.965-0500 c20012| 2016-04-06T02:52:08.421-0500 I COMMAND [conn7] command config.chunks command: count { count: "chunks", query: { ns: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }, maxTimeMS: 30000 } planSummary: COUNT_SCAN { ns: 1, min: 1 } numYields:0 reslen:313 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:30.972-0500 c20012| 2016-04-06T02:52:08.422-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 338 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|7, t: 1, h: 3619006086554899272, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.421-0500-5704c02806c33406d4d9c0bf", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128421), what: "shardCollection.start", ns: "multidrop.coll", details: { shardKey: { _id: 1.0 }, collection: "multidrop.coll", primary: "shard0000:mongovm16:20010", initShards: [], numChunks: 1 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:30.976-0500 c20012| 2016-04-06T02:52:08.422-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|7 and ending at ts: Timestamp 1459929128000|7 [js_test:multi_coll_drop] 2016-04-06T02:52:30.978-0500 c20012| 2016-04-06T02:52:08.422-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:30.979-0500 c20012| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.981-0500 c20012| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.985-0500 c20012| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.988-0500 c20012| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.988-0500 c20012| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.990-0500 c20012| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.992-0500 c20012| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.994-0500 c20012| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.995-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:30.999-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.006-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.009-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.009-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.011-0500 c20012| 2016-04-06T02:52:08.423-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:31.013-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.013-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.013-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.014-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.015-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.016-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
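
The bursts of "starting/shutting down thread in pool repl writer worker Pool" pairs bracket each oplog apply batch; with "replication batch size is 1", all sixteen writer threads are spun up and torn down around a single-document batch, which is why these messages dominate the debug output. The count on config.chunks just before this is the more interesting query: it is satisfied as a "fast count" (COUNT_SCAN over { ns: 1, min: 1 }) because a single equality on the index prefix can be answered from index keys alone, with no document fetches. A shell sketch of the same read follows, with the internal afterOpTime read-concern field omitted since only server-internal callers set it.

    // Majority-read chunk count, as in the log; afterOpTime is internal and omitted.
    var configDB = db.getSiblingDB("config");
    printjson(configDB.runCommand({
        count: "chunks",
        query: { ns: "multidrop.coll" },   // answered from the ns_1_min_1 index
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    }));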
2016-04-06T02:52:31.017-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.020-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.022-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.022-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.027-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.028-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.028-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.030-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.033-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.033-0500 c20012| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.037-0500 c20012| 2016-04-06T02:52:08.424-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.039-0500 c20012| 2016-04-06T02:52:08.424-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.040-0500 c20012| 2016-04-06T02:52:08.424-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.044-0500 c20012| 2016-04-06T02:52:08.424-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:31.053-0500 c20012| 2016-04-06T02:52:08.424-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.060-0500 c20012| 2016-04-06T02:52:08.424-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 340 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.060-0500 c20012| 2016-04-06T02:52:08.424-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 340 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:31.061-0500 c20012| 2016-04-06T02:52:08.424-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 340 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.064-0500 c20012| 2016-04-06T02:52:08.424-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 342 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.424-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.066-0500 c20012| 2016-04-06T02:52:08.424-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 342 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:31.073-0500 c20012| 2016-04-06T02:52:08.429-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.077-0500 c20012| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 343 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.077-0500 c20012| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 343 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:31.079-0500 c20012| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 343 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.082-0500 c20012| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 342 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.085-0500 c20012| 2016-04-06T02:52:08.429-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.088-0500 c20012| 2016-04-06T02:52:08.429-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:31.094-0500 c20012| 2016-04-06T02:52:08.429-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 346 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.429-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.095-0500 c20012| 2016-04-06T02:52:08.430-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 346 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:31.100-0500 c20012| 2016-04-06T02:52:08.431-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 346 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|8, t: 1, h: -9148699677568158286, v: 2, op: "i", ns: "config.chunks", o: { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.104-0500 c20012| 2016-04-06T02:52:08.431-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|8 and ending at ts: Timestamp 1459929128000|8 [js_test:multi_coll_drop] 2016-04-06T02:52:31.105-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.107-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.111-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.114-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.115-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.116-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR 
[repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.117-0500 c20011| 2016-04-06T02:52:08.572-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:31.132-0500 c20012| 2016-04-06T02:52:08.431-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:31.133-0500 c20011| 2016-04-06T02:52:08.572-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.134-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.135-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.136-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.140-0500 c20013| 2016-04-06T02:52:08.397-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.141-0500 c20013| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.141-0500 c20013| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.143-0500 c20013| 2016-04-06T02:52:08.398-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.145-0500 c20013| 2016-04-06T02:52:08.398-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:31.150-0500 c20013| 2016-04-06T02:52:08.398-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.155-0500 c20013| 2016-04-06T02:52:08.398-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 326 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.156-0500 c20012| 2016-04-06T02:52:08.431-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.157-0500 c20011| 2016-04-06T02:52:08.572-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|20, t: 1 } and is durable through: { ts: Timestamp 1459929128000|20, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.158-0500 c20011| 2016-04-06T02:52:08.572-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|20, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.158-0500 c20012| 2016-04-06T02:52:08.431-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.159-0500 c20012| 2016-04-06T02:52:08.431-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.159-0500 c20012| 2016-04-06T02:52:08.431-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:31.160-0500 c20013| 2016-04-06T02:52:08.398-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 326 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:31.164-0500 c20011| 2016-04-06T02:52:08.572-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.167-0500 c20011| 
[js_test:multi_coll_drop] 2016-04-06T02:52:31.167-0500 c20011| 2016-04-06T02:52:08.572-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.168-0500 c20013| 2016-04-06T02:52:08.398-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 327 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.398-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.170-0500 c20013| 2016-04-06T02:52:08.399-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 326 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.174-0500 c20011| 2016-04-06T02:52:08.572-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f17e') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:612 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 9ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.180-0500 c20011| 2016-04-06T02:52:08.572-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.183-0500 c20011| 2016-04-06T02:52:08.573-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.189-0500 c20011| 2016-04-06T02:52:08.573-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.190-0500 c20011| 2016-04-06T02:52:08.573-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.194-0500 c20011| 2016-04-06T02:52:08.573-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.199-0500 c20011| 2016-04-06T02:52:08.573-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:31.204-0500 c20011| 2016-04-06T02:52:08.573-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.217-0500 c20011| 2016-04-06T02:52:08.573-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.236-0500 c20011| 2016-04-06T02:52:08.575-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.236-0500 c20011| 2016-04-06T02:52:08.575-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:31.237-0500 c20011| 2016-04-06T02:52:08.575-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|20, t: 1 } and is durable through: { ts: Timestamp 1459929128000|20, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.238-0500 c20011| 2016-04-06T02:52:08.575-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.255-0500 c20011| 2016-04-06T02:52:08.575-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.260-0500 c20011| 2016-04-06T02:52:08.580-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f180'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128579), why: "splitting chunk [{ _id: -99.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.261-0500 c20011| 2016-04-06T02:52:08.580-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.264-0500 c20011| 2016-04-06T02:52:08.580-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.267-0500 c20011| 2016-04-06T02:52:08.580-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.270-0500 c20011| 2016-04-06T02:52:08.580-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.283-0500 c20011| 2016-04-06T02:52:08.580-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.285-0500 c20013| 2016-04-06T02:52:08.399-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 327 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.302-0500 c20013| 2016-04-06T02:52:08.399-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.310-0500 c20013| 2016-04-06T02:52:08.399-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 329 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.311-0500 c20013| 2016-04-06T02:52:08.399-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 329 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.315-0500 c20013| 2016-04-06T02:52:08.399-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 329 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.319-0500 c20013| 2016-04-06T02:52:08.400-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 327 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.320-0500 c20013| 2016-04-06T02:52:08.400-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.324-0500 c20013| 2016-04-06T02:52:08.400-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:31.342-0500 c20013| 2016-04-06T02:52:08.400-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 332 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.400-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.344-0500 c20013| 2016-04-06T02:52:08.400-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 332 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.353-0500 c20013| 2016-04-06T02:52:08.414-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 332 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|6, t: 1, h: -3361189010770049215, v: 2, op: "i", ns: "config.locks", o: { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c02806c33406d4d9c0be'), who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929128413), why: "shardCollection" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.365-0500 c20013| 2016-04-06T02:52:08.414-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|6 and ending at ts: Timestamp 1459929128000|6
[js_test:multi_coll_drop] 2016-04-06T02:52:31.366-0500 c20013| 2016-04-06T02:52:08.414-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
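This is the steady-state replication loop on secondary c20013: rsBackgroundSync tails the primary's oplog with getMore (maxTimeMS: 2500), sends its lastKnownCommittedOpTime so commit-point advances can ride back even on empty batches, and hands each fetched batch (here a single config.locks insert) to the rsSync applier. A rough shell equivalent of the tailing side (a sketch using the legacy shell's cursor options):

    // Tail the oplog from a known timestamp, blocking briefly for new entries.
    var cur = db.getSiblingDB("local").oplog.rs
        .find({ ts: { $gte: Timestamp(1459929128, 6) } })
        .addOption(DBQuery.Option.tailable)
        .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) printjson(cur.next());
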
[js_test:multi_coll_drop] 2016-04-06T02:52:31.367-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.368-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.368-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.369-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.369-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.369-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.370-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.371-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.373-0500 c20013| 2016-04-06T02:52:08.414-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.374-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.378-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.379-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.381-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.383-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.384-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.389-0500 c20013| 2016-04-06T02:52:08.415-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:31.390-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.391-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.393-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.395-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.397-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.398-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.399-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.400-0500 c20013| 2016-04-06T02:52:08.415-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.402-0500 c20013| 2016-04-06T02:52:08.416-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 334 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.416-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.404-0500 c20013| 2016-04-06T02:52:08.416-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 334 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.407-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.409-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.411-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.411-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.414-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.415-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.416-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.416-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.417-0500 c20013| 2016-04-06T02:52:08.418-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.419-0500 c20013| 2016-04-06T02:52:08.418-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:31.421-0500 c20013| 2016-04-06T02:52:08.418-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.426-0500 c20013| 2016-04-06T02:52:08.418-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 335 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.426-0500 c20013| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 335 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.428-0500 c20013| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 335 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.430-0500 c20013| 2016-04-06T02:52:08.419-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.436-0500 c20013| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 337 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.438-0500 c20013| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 337 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.439-0500 c20013| 2016-04-06T02:52:08.419-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 337 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.441-0500 c20013| 2016-04-06T02:52:08.420-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 334 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.443-0500 c20013| 2016-04-06T02:52:08.420-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.447-0500 c20013| 2016-04-06T02:52:08.420-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:31.454-0500 c20013| 2016-04-06T02:52:08.420-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 340 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.420-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.462-0500 c20013| 2016-04-06T02:52:08.420-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 340 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.464-0500 c20013| 2016-04-06T02:52:08.422-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 340 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|7, t: 1, h: 3619006086554899272, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.421-0500-5704c02806c33406d4d9c0bf", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128421), what: "shardCollection.start", ns: "multidrop.coll", details: { shardKey: { _id: 1.0 }, collection: "multidrop.coll", primary: "shard0000:mongovm16:20010", initShards: [], numChunks: 1 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.468-0500 c20013| 2016-04-06T02:52:08.422-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|7 and ending at ts: Timestamp 1459929128000|7
[js_test:multi_coll_drop] 2016-04-06T02:52:31.470-0500 c20013| 2016-04-06T02:52:08.422-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
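The batch fetched by request 340 is a config.changelog insert: every notable sharding event (here "shardCollection.start" for multidrop.coll) is journaled to that collection on the config servers, which makes it the natural place to reconstruct the metadata history of a test run:

    // List sharding events for the test collection, newest first.
    db.getSiblingDB("config").changelog
        .find({ ns: "multidrop.coll" })
        .sort({ time: -1 })
        .forEach(printjson);
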
[js_test:multi_coll_drop] 2016-04-06T02:52:31.471-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.474-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.476-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.477-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.479-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.481-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.481-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.484-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.484-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.489-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.492-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.493-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.494-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.497-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.498-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.501-0500 c20013| 2016-04-06T02:52:08.422-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:31.502-0500 c20013| 2016-04-06T02:52:08.422-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.503-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.505-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.506-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.508-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.509-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.509-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.511-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.512-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.515-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.516-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.518-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.518-0500 c20013| 2016-04-06T02:52:08.423-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.519-0500 c20013| 2016-04-06T02:52:08.424-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.520-0500 c20013| 2016-04-06T02:52:08.424-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.521-0500 c20013| 2016-04-06T02:52:08.424-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.523-0500 c20013| 2016-04-06T02:52:08.424-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:31.530-0500 c20013| 2016-04-06T02:52:08.424-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:31.541-0500 c20013| 2016-04-06T02:52:08.424-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.548-0500 c20013| 2016-04-06T02:52:08.424-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 342 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.555-0500 c20013| 2016-04-06T02:52:08.424-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 342 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.560-0500 c20013| 2016-04-06T02:52:08.424-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 343 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.424-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.569-0500 c20013| 2016-04-06T02:52:08.424-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 343 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:31.574-0500 c20013| 2016-04-06T02:52:08.425-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 342 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.581-0500 c20013| 2016-04-06T02:52:08.429-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.590-0500 c20011| 2016-04-06T02:52:08.583-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.591-0500 c20011| 2016-04-06T02:52:08.583-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:31.596-0500 c20011| 2016-04-06T02:52:08.583-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.596-0500 c20011| 2016-04-06T02:52:08.583-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:31.601-0500 c20011| 2016-04-06T02:52:08.583-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|21, t: 1 } and is durable through: { ts: Timestamp 1459929128000|20, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.602-0500 c20011| 2016-04-06T02:52:08.583-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.607-0500 c20011| 2016-04-06T02:52:08.583-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.609-0500 c20011| 2016-04-06T02:52:08.583-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.611-0500 c20011| 2016-04-06T02:52:08.583-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|21, t: 1 } and is durable through: { ts: Timestamp 1459929128000|20, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.614-0500 c20011| 2016-04-06T02:52:08.583-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.615-0500 c20011| 2016-04-06T02:52:08.583-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.619-0500 c20011| 2016-04-06T02:52:08.583-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|21, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|20, t: 1 }, name-id: "100" }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.622-0500 c20011| 2016-04-06T02:52:08.583-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.627-0500 c20011| 2016-04-06T02:52:08.585-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.627-0500 c20011| 2016-04-06T02:52:08.585-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:31.631-0500 c20011| 2016-04-06T02:52:08.585-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|21, t: 1 } and is durable through: { ts: Timestamp 1459929128000|21, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.634-0500 c20011| 2016-04-06T02:52:08.585-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|21, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.639-0500 c20011| 2016-04-06T02:52:08.585-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.640-0500 c20011| 2016-04-06T02:52:08.585-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:31.644-0500 c20011| 2016-04-06T02:52:08.585-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.649-0500 c20011| 2016-04-06T02:52:08.585-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.652-0500 c20011| 2016-04-06T02:52:08.585-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.656-0500 c20011| 2016-04-06T02:52:08.585-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|21, t: 1 } and is durable through: { ts: Timestamp 1459929128000|21, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:31.662-0500 c20011| 2016-04-06T02:52:08.585-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.667-0500 c20011| 2016-04-06T02:52:08.585-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.675-0500 c20011| 2016-04-06T02:52:08.585-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f180'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128579), why: "splitting chunk [{ _id: -99.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f180'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128579), why: "splitting chunk [{ _id: -99.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms
[js_test:multi_coll_drop] 2016-04-06T02:52:31.683-0500 c20011| 2016-04-06T02:52:08.585-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|21, t: 1 } }, limit: 1, maxTimeMS: 30000 }
1459929128000|21, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.689-0500 c20011| 2016-04-06T02:52:08.585-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.694-0500 c20011| 2016-04-06T02:52:08.585-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|21, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.694-0500 c20011| 2016-04-06T02:52:08.585-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:31.702-0500 c20011| 2016-04-06T02:52:08.585-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|21, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.712-0500 c20011| 2016-04-06T02:52:08.585-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.716-0500 c20011| 2016-04-06T02:52:08.586-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.722-0500 c20011| 2016-04-06T02:52:08.586-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: -98.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-99.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-98.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|4 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.725-0500 c20011| 2016-04-06T02:52:08.586-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:31.727-0500 c20011| 2016-04-06T02:52:08.586-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.733-0500 c20011| 2016-04-06T02:52:08.586-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:31.736-0500 c20011| 2016-04-06T02:52:08.586-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.736-0500 c20011| 2016-04-06T02:52:08.586-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-99.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:31.739-0500 c20011| 2016-04-06T02:52:08.586-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-98.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:31.747-0500 c20011| 2016-04-06T02:52:08.587-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.756-0500 c20011| 2016-04-06T02:52:08.587-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 284 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.759-0500 c20011| 2016-04-06T02:52:08.589-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.764-0500 c20011| 2016-04-06T02:52:08.590-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.766-0500 c20011| 2016-04-06T02:52:08.590-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|22, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|21, t: 1 }, name-id: "101" } [js_test:multi_coll_drop] 2016-04-06T02:52:31.771-0500 c20011| 2016-04-06T02:52:08.591-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.772-0500 c20011| 2016-04-06T02:52:08.591-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:31.776-0500 c20011| 2016-04-06T02:52:08.591-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.778-0500 c20011| 2016-04-06T02:52:08.591-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|22, t: 1 } and is durable through: { ts: Timestamp 1459929128000|21, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.785-0500 c20011| 2016-04-06T02:52:08.591-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|22, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|21, t: 1 }, name-id: "101" } [js_test:multi_coll_drop] 2016-04-06T02:52:31.801-0500 c20011| 2016-04-06T02:52:08.591-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.805-0500 c20011| 2016-04-06T02:52:08.595-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.809-0500 c20011| 2016-04-06T02:52:08.595-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:31.812-0500 c20011| 2016-04-06T02:52:08.595-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|22, t: 1 } and is durable through: { ts: Timestamp 1459929128000|21, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.814-0500 c20011| 2016-04-06T02:52:08.595-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|22, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|21, t: 1 }, name-id: "101" } [js_test:multi_coll_drop] 2016-04-06T02:52:31.820-0500 c20011| 2016-04-06T02:52:08.595-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.824-0500 c20011| 
2016-04-06T02:52:08.595-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.834-0500 c20011| 2016-04-06T02:52:08.597-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.834-0500 c20011| 2016-04-06T02:52:08.597-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:31.844-0500 c20011| 2016-04-06T02:52:08.597-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|22, t: 1 } and is durable through: { ts: Timestamp 1459929128000|22, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.851-0500 c20011| 2016-04-06T02:52:08.597-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|22, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.862-0500 c20011| 2016-04-06T02:52:08.597-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.863-0500 c20011| 2016-04-06T02:52:08.597-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:31.870-0500 c20011| 2016-04-06T02:52:08.597-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.881-0500 c20011| 2016-04-06T02:52:08.597-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 
reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.888-0500 c20011| 2016-04-06T02:52:08.597-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.892-0500 c20011| 2016-04-06T02:52:08.597-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|22, t: 1 } and is durable through: { ts: Timestamp 1459929128000|22, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.895-0500 c20011| 2016-04-06T02:52:08.597-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.920-0500 c20011| 2016-04-06T02:52:08.597-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.942-0500 c20011| 2016-04-06T02:52:08.597-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: -98.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-99.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-98.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|4 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.948-0500 c20011| 2016-04-06T02:52:08.597-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms 
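The applyOps entry just above is the commit step of the chunk split: both halves ({ _id: -99.0 } through -98.0, and -98.0 through MaxKey) are written to config.chunks in a single batch, and the preCondition makes the whole batch fail if the collection's highest lastmod is no longer Timestamp 1000|4, i.e. if anything else changed the metadata after the splitter read it. A minimal shell sketch of the same command, with every value copied from the log entry; the wrapper itself is illustrative:

  // Commit a two-way split on the config server. The batch applies in full
  // or not at all; the preCondition guards against concurrent metadata
  // changes. All values below are taken from the log entry above.
  var epoch = ObjectId('5704c02806c33406d4d9c0c0');
  db.getSiblingDB("config").runCommand({
      applyOps: [
          { op: "u", b: true, ns: "config.chunks",
            o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp(1000, 5),
                 lastmodEpoch: epoch, ns: "multidrop.coll",
                 min: { _id: -99.0 }, max: { _id: -98.0 }, shard: "shard0000" },
            o2: { _id: "multidrop.coll-_id_-99.0" } },
          { op: "u", b: true, ns: "config.chunks",
            o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp(1000, 6),
                 lastmodEpoch: epoch, ns: "multidrop.coll",
                 min: { _id: -98.0 }, max: { _id: MaxKey }, shard: "shard0000" },
            o2: { _id: "multidrop.coll-_id_-98.0" } }
      ],
      // fail unless the newest chunk for the collection still carries the
      // lastmod the splitter observed before building the batch
      preCondition: [
          { ns: "config.chunks",
            q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
            res: { lastmod: Timestamp(1000, 4) } }
      ],
      writeConcern: { w: "majority", wtimeout: 15000 },
      maxTimeMS: 30000
  });

The w: "majority" write concern is also why the surrounding log lines show the primary blocking on "Required snapshot optime ... is not yet part of the current 'committed' snapshot": the command does not return until the batch's optime is majority-committed.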
[js_test:multi_coll_drop] 2016-04-06T02:52:31.957-0500 c20011| 2016-04-06T02:52:08.598-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.598-0500-5704c02865c17830b843f181", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128598), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -99.0 }, max: { _id: MaxKey } }, left: { min: { _id: -99.0 }, max: { _id: -98.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -98.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.963-0500 c20011| 2016-04-06T02:52:08.598-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.965-0500 c20011| 2016-04-06T02:52:08.598-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.972-0500 c20011| 2016-04-06T02:52:08.598-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:31.974-0500 c20011| 2016-04-06T02:52:08.599-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:31.984-0500 c20011| 2016-04-06T02:52:08.600-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:31.985-0500 c20011| 2016-04-06T02:52:08.600-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:31.989-0500 c20011| 2016-04-06T02:52:08.600-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.991-0500 c20011| 2016-04-06T02:52:08.600-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has 
reached optime: { ts: Timestamp 1459929128000|23, t: 1 } and is durable through: { ts: Timestamp 1459929128000|22, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:31.999-0500 c20011| 2016-04-06T02:52:08.600-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.017-0500 c20011| 2016-04-06T02:52:08.601-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.019-0500 c20011| 2016-04-06T02:52:08.601-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.020-0500 c20011| 2016-04-06T02:52:08.601-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.022-0500 c20011| 2016-04-06T02:52:08.601-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|23, t: 1 } and is durable through: { ts: Timestamp 1459929128000|22, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.022-0500 c20011| 2016-04-06T02:52:08.601-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.025-0500 c20011| 2016-04-06T02:52:08.601-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.028-0500 c20011| 2016-04-06T02:52:08.601-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|23, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|22, t: 1 }, name-id: "102" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.032-0500 c20011| 2016-04-06T02:52:08.601-0500 D COMMAND [conn13] run command 
local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.041-0500 c20011| 2016-04-06T02:52:08.602-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.042-0500 c20011| 2016-04-06T02:52:08.602-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.046-0500 c20011| 2016-04-06T02:52:08.602-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.050-0500 c20011| 2016-04-06T02:52:08.602-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|23, t: 1 } and is durable through: { ts: Timestamp 1459929128000|23, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.051-0500 c20011| 2016-04-06T02:52:08.602-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|23, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.058-0500 c20011| 2016-04-06T02:52:08.602-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.066-0500 c20011| 2016-04-06T02:52:08.602-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.067-0500 c20011| 2016-04-06T02:52:08.602-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.070-0500 c20011| 2016-04-06T02:52:08.602-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|23, t: 1 } and is durable through: { ts: Timestamp 1459929128000|23, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.072-0500 c20011| 2016-04-06T02:52:08.602-0500 D REPL [conn12] received 
notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.078-0500 c20011| 2016-04-06T02:52:08.602-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.086-0500 c20011| 2016-04-06T02:52:08.602-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.110-0500 c20011| 2016-04-06T02:52:08.602-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.598-0500-5704c02865c17830b843f181", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128598), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -99.0 }, max: { _id: MaxKey } }, left: { min: { _id: -99.0 }, max: { _id: -98.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -98.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.111-0500 c20011| 2016-04-06T02:52:08.602-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.115-0500 c20011| 2016-04-06T02:52:08.602-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.118-0500 c20011| 2016-04-06T02:52:08.602-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f180') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 
2016-04-06T02:52:32.123-0500 c20011| 2016-04-06T02:52:08.603-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.128-0500 c20011| 2016-04-06T02:52:08.603-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02865c17830b843f180') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.133-0500 c20011| 2016-04-06T02:52:08.603-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.135-0500 c20011| 2016-04-06T02:52:08.603-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.137-0500 c20011| 2016-04-06T02:52:08.603-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.140-0500 c20011| 2016-04-06T02:52:08.604-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|24, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|23, t: 1 }, name-id: "103" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.142-0500 c20011| 2016-04-06T02:52:08.606-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.144-0500 c20011| 2016-04-06T02:52:08.606-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.147-0500 c20011| 2016-04-06T02:52:08.607-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.148-0500 c20011| 2016-04-06T02:52:08.607-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.150-0500 c20011| 2016-04-06T02:52:08.607-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 
1459929128000|24, t: 1 } and is durable through: { ts: Timestamp 1459929128000|23, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.154-0500 c20011| 2016-04-06T02:52:08.607-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|24, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|23, t: 1 }, name-id: "103" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.163-0500 c20011| 2016-04-06T02:52:08.607-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.180-0500 c20011| 2016-04-06T02:52:08.607-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.184-0500 c20011| 2016-04-06T02:52:08.608-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.184-0500 c20011| 2016-04-06T02:52:08.608-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.194-0500 c20011| 2016-04-06T02:52:08.608-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.197-0500 c20011| 2016-04-06T02:52:08.608-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|24, t: 1 } and is durable through: { ts: Timestamp 1459929128000|23, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.204-0500 c20011| 2016-04-06T02:52:08.608-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|24, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|23, t: 1 }, name-id: "103" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.212-0500 c20011| 2016-04-06T02:52:08.608-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.245-0500 c20011| 2016-04-06T02:52:08.609-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.245-0500 c20011| 2016-04-06T02:52:08.609-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.254-0500 c20011| 2016-04-06T02:52:08.609-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|24, t: 1 } and is durable through: { ts: Timestamp 1459929128000|24, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.255-0500 c20011| 2016-04-06T02:52:08.609-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|24, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.261-0500 c20011| 2016-04-06T02:52:08.609-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.267-0500 c20011| 2016-04-06T02:52:08.609-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.271-0500 c20011| 2016-04-06T02:52:08.609-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.280-0500 c20011| 2016-04-06T02:52:08.609-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f180') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 
protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.288-0500 c20011| 2016-04-06T02:52:08.609-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.290-0500 c20011| 2016-04-06T02:52:08.610-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.295-0500 c20011| 2016-04-06T02:52:08.610-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.555-0500 c20011| 2016-04-06T02:52:08.610-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.560-0500 c20011| 2016-04-06T02:52:08.610-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.562-0500 c20011| 2016-04-06T02:52:08.610-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.566-0500 c20011| 2016-04-06T02:52:08.610-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|24, t: 1 } and is durable through: { ts: Timestamp 1459929128000|24, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.568-0500 c20011| 2016-04-06T02:52:08.610-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.572-0500 c20011| 2016-04-06T02:52:08.615-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f182'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128615), why: "splitting chunk [{ _id: -98.0 
}, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.573-0500 c20011| 2016-04-06T02:52:08.615-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.576-0500 c20011| 2016-04-06T02:52:08.615-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.579-0500 c20011| 2016-04-06T02:52:08.615-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.583-0500 c20011| 2016-04-06T02:52:08.615-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.587-0500 c20011| 2016-04-06T02:52:08.615-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.595-0500 c20011| 2016-04-06T02:52:08.618-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.596-0500 c20011| 2016-04-06T02:52:08.618-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.598-0500 c20011| 2016-04-06T02:52:08.618-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|25, t: 1 } and is durable through: { ts: Timestamp 1459929128000|24, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.600-0500 c20011| 2016-04-06T02:52:08.618-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.606-0500 c20011| 2016-04-06T02:52:08.618-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.608-0500 c20011| 2016-04-06T02:52:08.618-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.612-0500 c20011| 2016-04-06T02:52:08.618-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.615-0500 c20011| 2016-04-06T02:52:08.618-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|25, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|24, t: 1 }, name-id: "104" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.620-0500 c20011| 2016-04-06T02:52:08.619-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.620-0500 c20011| 2016-04-06T02:52:08.619-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.624-0500 c20011| 2016-04-06T02:52:08.619-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.626-0500 c20011| 2016-04-06T02:52:08.619-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|25, t: 1 } and is durable through: { ts: Timestamp 1459929128000|24, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.630-0500 c20011| 2016-04-06T02:52:08.619-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|25, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|24, t: 1 }, name-id: "104" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.642-0500 c20011| 2016-04-06T02:52:08.619-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 
locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.647-0500 c20011| 2016-04-06T02:52:08.626-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.647-0500 c20011| 2016-04-06T02:52:08.626-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.652-0500 c20011| 2016-04-06T02:52:08.626-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.654-0500 c20011| 2016-04-06T02:52:08.626-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|25, t: 1 } and is durable through: { ts: Timestamp 1459929128000|25, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.655-0500 c20011| 2016-04-06T02:52:08.626-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|25, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.665-0500 c20011| 2016-04-06T02:52:08.626-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.666-0500 s20015| 2016-04-06T02:52:17.335-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:32.668-0500 s20015| 2016-04-06T02:52:17.335-0500 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:32.668-0500 s20015| 2016-04-06T02:52:17.335-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:32.669-0500 s20015| 2016-04-06T02:52:17.335-0500 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:32.671-0500 c20012| 2016-04-06T02:52:08.431-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:32.672-0500 c20012| 2016-04-06T02:52:08.431-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:32.675-0500 c20012| 2016-04-06T02:52:08.431-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:32.675-0500 c20012| 2016-04-06T02:52:08.431-0500 D EXECUTOR 
[repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:32.676-0500 c20012| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:32.678-0500 d20010| 2016-04-06T02:52:17.059-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:32.684-0500 c20011| 2016-04-06T02:52:08.626-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.684-0500 c20011| 2016-04-06T02:52:08.626-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.688-0500 c20011| 2016-04-06T02:52:08.626-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|25, t: 1 } and is durable through: { ts: Timestamp 1459929128000|25, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.691-0500 c20011| 2016-04-06T02:52:08.626-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.696-0500 c20011| 2016-04-06T02:52:08.626-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.700-0500 c20011| 2016-04-06T02:52:08.626-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.706-0500 c20011| 2016-04-06T02:52:08.626-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f182'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128615), why: "splitting chunk [{ _id: -98.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { 
ts: ObjectId('5704c02865c17830b843f182'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128615), why: "splitting chunk [{ _id: -98.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.709-0500 c20011| 2016-04-06T02:52:08.626-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.711-0500 c20011| 2016-04-06T02:52:08.626-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.714-0500 c20011| 2016-04-06T02:52:08.627-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.716-0500 c20011| 2016-04-06T02:52:08.627-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|6 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|25, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.719-0500 c20011| 2016-04-06T02:52:08.627-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.722-0500 c20011| 2016-04-06T02:52:08.627-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|6 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|25, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.724-0500 c20011| 2016-04-06T02:52:08.627-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:32.737-0500 c20011| 2016-04-06T02:52:08.627-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|6 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|25, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.747-0500 c20011| 2016-04-06T02:52:08.628-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: -97.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-98.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-97.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|6 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.748-0500 c20011| 2016-04-06T02:52:08.628-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:32.753-0500 c20011| 2016-04-06T02:52:08.628-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:32.755-0500 c20011| 2016-04-06T02:52:08.628-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.758-0500 c20011| 2016-04-06T02:52:08.628-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-98.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.759-0500 c20011| 2016-04-06T02:52:08.628-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-97.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.763-0500 c20011| 2016-04-06T02:52:08.628-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", 
maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.769-0500 c20011| 2016-04-06T02:52:08.628-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.773-0500 c20011| 2016-04-06T02:52:08.631-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|26, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|25, t: 1 }, name-id: "105" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.775-0500 c20011| 2016-04-06T02:52:08.631-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.786-0500 c20011| 2016-04-06T02:52:08.632-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.787-0500 c20011| 2016-04-06T02:52:08.632-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.792-0500 c20011| 2016-04-06T02:52:08.632-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|26, t: 1 } and is durable through: { ts: Timestamp 1459929128000|25, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.795-0500 c20011| 2016-04-06T02:52:08.632-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|26, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|25, t: 1 }, name-id: "105" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.799-0500 c20011| 2016-04-06T02:52:08.632-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.814-0500 c20011| 2016-04-06T02:52:08.632-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 
1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.819-0500 c20011| 2016-04-06T02:52:08.631-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:32.823-0500 c20011| 2016-04-06T02:52:08.633-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.823-0500 c20011| 2016-04-06T02:52:08.633-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:32.826-0500 c20011| 2016-04-06T02:52:08.633-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.829-0500 c20011| 2016-04-06T02:52:08.633-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|26, t: 1 } and is durable through: { ts: Timestamp 1459929128000|25, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:32.831-0500 c20011| 2016-04-06T02:52:08.633-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|26, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|25, t: 1 }, name-id: "105" } [js_test:multi_coll_drop] 2016-04-06T02:52:32.835-0500 c20011| 2016-04-06T02:52:08.633-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:32.841-0500 c20011| 2016-04-06T02:52:08.634-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:32.841-0500 s20015| 2016-04-06T02:52:17.336-0500 D NETWORK [ReplicaSetMonitorWatcher] connected connection! 
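The replSetUpdatePosition traffic that dominates this stretch is how the primary (c20011) learns each member's durable optime. Whenever a majority of members become durable at some optime it logs "Updating _lastCommittedOpTime", and any w:majority write or readConcern:majority read needing a newer 'committed' snapshot keeps waiting (the "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines) until that point advances. A hedged sketch of the commit-point rule, assuming the primary holds one freshest durable optime per member, itself included; this illustrates the rule and is not the server's ReplicationCoordinator code:

  function compareOpTimes(a, b) {
      // a, b: { ts: Timestamp, t: <term> }. Term dominates; Timestamps
      // order by seconds (ts.t) and then by increment (ts.i).
      if (a.t !== b.t) return a.t - b.t;
      if (a.ts.t !== b.ts.t) return a.ts.t - b.ts.t;
      return a.ts.i - b.ts.i;
  }
  function majorityCommittedOpTime(memberDurables) {
      // memberDurables: the primary's current view of every member's
      // durable optime, its own included.
      var sorted = memberDurables.slice().sort(compareOpTimes);
      var majority = Math.floor(sorted.length / 2) + 1;
      // the highest optime that `majority` members have reached sits
      // (majority - 1) places from the top of the ascending sort
      return sorted[sorted.length - majority];
  }
  // With the three config members above (primary durable at ...128000|24,
  // one secondary at ...128000|24, the other still at ...127000|16), this
  // yields ...128000|24, matching "Updating _lastCommittedOpTime to
  // { ts: Timestamp 1459929128000|24, t: 1 }" in the log.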
[js_test:multi_coll_drop] 2016-04-06T02:52:32.850-0500 s20015| 2016-04-06T02:52:17.436-0500 D ASIO [Balancer] startCommand: RemoteCommand 48 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:47.436-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929137435), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.854-0500 s20015| 2016-04-06T02:52:17.436-0500 I ASIO [Balancer] dropping unhealthy pooled connection to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.855-0500 s20015| 2016-04-06T02:52:17.436-0500 I ASIO [Balancer] dropping unhealthy pooled connection to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.857-0500 s20015| 2016-04-06T02:52:17.436-0500 I ASIO [Balancer] after drop, pool was empty, going to spawn some connections
[js_test:multi_coll_drop] 2016-04-06T02:52:32.862-0500 s20015| 2016-04-06T02:52:17.436-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.865-0500 s20015| 2016-04-06T02:52:17.436-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 49 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.867-0500 s20015| 2016-04-06T02:52:17.436-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.869-0500 s20015| 2016-04-06T02:52:17.436-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 49 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:32.874-0500 s20015| 2016-04-06T02:52:17.436-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 48 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.876-0500 s20015| 2016-04-06T02:52:17.437-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 48 finished with response: { ok: 0.0, errmsg: "not master", code: 10107 }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.878-0500 s20015| 2016-04-06T02:52:17.437-0500 D NETWORK [Balancer] Marking host mongovm16:20011 as failed
[js_test:multi_coll_drop] 2016-04-06T02:52:32.880-0500 s20015| 2016-04-06T02:52:17.437-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: NotMaster: not master
[js_test:multi_coll_drop] 2016-04-06T02:52:32.884-0500 s20015| 2016-04-06T02:52:17.437-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:32.885-0500 s20015| 2016-04-06T02:52:17.437-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20011, event detected
[js_test:multi_coll_drop] 2016-04-06T02:52:32.888-0500 s20015| 2016-04-06T02:52:17.437-0500 I NETWORK [Balancer] Socket closed remotely, no longer connected (idle 10 secs, remote host 192.168.100.28:20011)
[js_test:multi_coll_drop] 2016-04-06T02:52:32.888-0500 s20015| 2016-04-06T02:52:17.437-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.890-0500 s20015| 2016-04-06T02:52:17.437-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:multi_coll_drop] 2016-04-06T02:52:32.890-0500 s20015| 2016-04-06T02:52:17.437-0500 D NETWORK [Balancer] connected to server mongovm16:20011 (192.168.100.28)
[js_test:multi_coll_drop] 2016-04-06T02:52:32.893-0500 s20015| 2016-04-06T02:52:17.437-0500 D NETWORK [Balancer] connected connection!
[js_test:multi_coll_drop] 2016-04-06T02:52:32.896-0500 s20015| 2016-04-06T02:52:17.438-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20012, no events
[js_test:multi_coll_drop] 2016-04-06T02:52:32.897-0500 s20015| 2016-04-06T02:52:17.438-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:32.901-0500 s20015| 2016-04-06T02:52:17.938-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:32.902-0500 s20015| 2016-04-06T02:52:17.939-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:32.906-0500 s20015| 2016-04-06T02:52:18.439-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:32.910-0500 s20015| 2016-04-06T02:52:18.440-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:32.919-0500 c20013| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 345 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.921-0500 c20013| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 345 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.922-0500 c20013| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 345 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.930-0500 c20013| 2016-04-06T02:52:08.429-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 343 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.936-0500 c20013| 2016-04-06T02:52:08.430-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|7, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.938-0500 c20013| 2016-04-06T02:52:08.430-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:32.944-0500 c20013| 2016-04-06T02:52:08.430-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 348 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.430-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.946-0500 c20013| 2016-04-06T02:52:08.430-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 348 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.955-0500 c20013| 2016-04-06T02:52:08.431-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 348 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|8, t: 1, h: -9148699677568158286, v: 2, op: "i", ns: "config.chunks", o: { _id: "multidrop.coll-_id_MinKey", ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.960-0500 c20013| 2016-04-06T02:52:08.432-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|8 and ending at ts: Timestamp 1459929128000|8
[js_test:multi_coll_drop] 2016-04-06T02:52:32.962-0500 c20013| 2016-04-06T02:52:08.432-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:32.962-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.964-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.965-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.966-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.967-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.967-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.971-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.973-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.976-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.976-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.977-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.978-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.980-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.980-0500 c20013| 2016-04-06T02:52:08.432-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:32.983-0500 c20013| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.984-0500 c20013| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.986-0500 c20013| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.988-0500 c20013| 2016-04-06T02:52:08.434-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 350 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.434-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:32.990-0500 c20013| 2016-04-06T02:52:08.434-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 350 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:32.993-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.994-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.996-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:32.997-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.021-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.033-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.040-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.043-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.045-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.050-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.051-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.053-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.055-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.063-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.063-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.065-0500 c20013| 2016-04-06T02:52:08.435-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.071-0500 c20013| 2016-04-06T02:52:08.436-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:33.073-0500 c20013| 2016-04-06T02:52:08.436-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.084-0500 c20013| 2016-04-06T02:52:08.436-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 351 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.085-0500 c20013| 2016-04-06T02:52:08.436-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 351 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:33.099-0500 c20013| 2016-04-06T02:52:08.436-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 351 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.102-0500 c20013| 2016-04-06T02:52:08.446-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.110-0500 c20013| 2016-04-06T02:52:08.446-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 353 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.111-0500 c20013| 2016-04-06T02:52:08.446-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 353 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:33.114-0500 c20013| 2016-04-06T02:52:08.447-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 353 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.115-0500 c20013| 2016-04-06T02:52:08.447-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 350 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.116-0500 c20013| 2016-04-06T02:52:08.447-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|8, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.118-0500 c20013| 2016-04-06T02:52:08.447-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:33.123-0500 c20013| 2016-04-06T02:52:08.447-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 356 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.447-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.123-0500 c20013| 2016-04-06T02:52:08.447-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 356 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:33.126-0500 c20013| 2016-04-06T02:52:08.463-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 356 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|9, t: 1, h: -9131470462815342067, v: 2, op: "c", ns: "config.$cmd", o: { create: "collections" } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.128-0500 c20013| 2016-04-06T02:52:08.463-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|9 and ending at ts: Timestamp 1459929128000|9
[js_test:multi_coll_drop] 2016-04-06T02:52:33.132-0500 c20013| 2016-04-06T02:52:08.463-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:33.132-0500 c20013| 2016-04-06T02:52:08.463-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.134-0500 c20013| 2016-04-06T02:52:08.463-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.135-0500 c20013| 2016-04-06T02:52:08.463-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.137-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.138-0500 c20013| 2016-04-06T02:52:08.463-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.140-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.142-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.143-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.145-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.146-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.146-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.148-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.150-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.152-0500 c20013| 2016-04-06T02:52:08.464-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:33.152-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.156-0500 c20013| 2016-04-06T02:52:08.464-0500 D STORAGE [repl writer worker 1] create collection config.collections {}
[js_test:multi_coll_drop] 2016-04-06T02:52:33.158-0500 c20013| 2016-04-06T02:52:08.464-0500 D STORAGE [repl writer worker 1] stored meta data for config.collections @ RecordId(17)
[js_test:multi_coll_drop] 2016-04-06T02:52:33.167-0500 c20013| 2016-04-06T02:52:08.464-0500 D STORAGE [repl writer worker 1] WiredTigerKVEngine::createRecordStore uri: table:collection-39-751336887848580549 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:52:33.169-0500 c20013| 2016-04-06T02:52:08.464-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.170-0500 c20013| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.173-0500 c20013| 2016-04-06T02:52:08.465-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 358 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.465-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.176-0500 c20013| 2016-04-06T02:52:08.465-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 358 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:33.184-0500 c20013| 2016-04-06T02:52:08.466-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 358 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|10, t: 1, h: 7600279498637035863, v: 2, op: "i", ns: "config.collections", o: { _id: "multidrop.coll", lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1.0 }, unique: false } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.188-0500 c20013| 2016-04-06T02:52:08.466-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|10 and ending at ts: Timestamp 1459929128000|10
[js_test:multi_coll_drop] 2016-04-06T02:52:33.189-0500 c20013| 2016-04-06T02:52:08.468-0500 D STORAGE [repl writer worker 1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-39-751336887848580549 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:52:33.191-0500 c20013| 2016-04-06T02:52:08.468-0500 D STORAGE [repl writer worker 1] config.collections: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:33.198-0500 c20013| 2016-04-06T02:52:08.468-0500 D STORAGE [repl writer worker 1] WiredTigerKVEngine::createSortedDataInterface ident: index-40-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.collections" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:33.210-0500 c20013| 2016-04-06T02:52:08.468-0500 D STORAGE [repl writer worker 1] create uri: table:index-40-751336887848580549 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.collections" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:33.213-0500 c20013| 2016-04-06T02:52:08.468-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 360 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.468-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.215-0500 c20013| 2016-04-06T02:52:08.468-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 360 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:33.216-0500 c20013| 2016-04-06T02:52:08.472-0500 D STORAGE [repl writer worker 1] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-40-751336887848580549 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:33.219-0500 c20013| 2016-04-06T02:52:08.472-0500 D STORAGE [repl writer worker 1] config.collections: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:33.220-0500 c20013| 2016-04-06T02:52:08.473-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.222-0500 c20013| 2016-04-06T02:52:08.473-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.224-0500 c20013| 2016-04-06T02:52:08.473-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.229-0500 c20013| 2016-04-06T02:52:08.473-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.229-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.230-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.231-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.232-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.237-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.238-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.241-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.250-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.260-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.262-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.262-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.263-0500 c20013| 2016-04-06T02:52:08.474-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.266-0500 c20013| 2016-04-06T02:52:08.474-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:33.270-0500 c20013| 2016-04-06T02:52:08.474-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:33.276-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.278-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.281-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.282-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.287-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.288-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.290-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.291-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.291-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.292-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.294-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.295-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.298-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.301-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.305-0500 c20013| 2016-04-06T02:52:08.475-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.305-0500 c20013| 2016-04-06T02:52:08.476-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:33.306-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.315-0500 c20013| 2016-04-06T02:52:08.476-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.323-0500 c20013| 2016-04-06T02:52:08.476-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 361 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.325-0500 c20013| 2016-04-06T02:52:08.476-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 361 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:33.326-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.326-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.329-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.330-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.330-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.333-0500 c20013| 2016-04-06T02:52:08.476-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 361 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.335-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.337-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.338-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.339-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.341-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.341-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.342-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.343-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:33.344-0500 d20010| 2016-04-06T02:52:17.560-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:33.345-0500 d20010| 2016-04-06T02:52:18.061-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:33.353-0500 d20010| 2016-04-06T02:52:18.359-0500 W NETWORK [ReplicaSetMonitorWatcher] No primary detected for set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:33.356-0500 c20011| 2016-04-06T02:52:08.634-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:33.360-0500 c20011| 2016-04-06T02:52:08.634-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.361-0500 c20011| 2016-04-06T02:52:08.634-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:33.365-0500 c20011| 2016-04-06T02:52:08.634-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|26, t: 1 } and is durable through: { ts: Timestamp 1459929128000|26, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.366-0500 c20011| 2016-04-06T02:52:08.634-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|26, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.369-0500 c20011| 2016-04-06T02:52:08.634-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.373-0500 c20011| 2016-04-06T02:52:08.634-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.379-0500 c20011| 2016-04-06T02:52:08.635-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.382-0500 c20011| 2016-04-06T02:52:08.635-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|26, t: 1 } and is durable through: { ts: Timestamp 1459929128000|26, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.397-0500 c20011| 2016-04-06T02:52:08.635-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.401-0500 c20011| 2016-04-06T02:52:08.635-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.408-0500 c20011| 2016-04-06T02:52:08.635-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.417-0500 c20011| 2016-04-06T02:52:08.635-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: -97.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-98.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-97.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|6 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.420-0500 c20011| 2016-04-06T02:52:08.635-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.429-0500 c20011| 2016-04-06T02:52:08.635-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.635-0500-5704c02865c17830b843f183", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128635), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -98.0 }, max: { _id: MaxKey } }, left: { min: { _id: -98.0 }, max: { _id: -97.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -97.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.436-0500 c20011| 2016-04-06T02:52:08.636-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.439-0500 c20011| 2016-04-06T02:52:08.636-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.444-0500 c20011| 2016-04-06T02:52:08.636-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.459-0500 c20011| 2016-04-06T02:52:08.637-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|27, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|26, t: 1 }, name-id: "106" }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.465-0500 c20011| 2016-04-06T02:52:08.638-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.466-0500 c20011| 2016-04-06T02:52:08.638-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:33.474-0500 c20011| 2016-04-06T02:52:08.638-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|27, t: 1 } and is durable through: { ts: Timestamp 1459929128000|26, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.477-0500 c20011| 2016-04-06T02:52:08.638-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|27, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|26, t: 1 }, name-id: "106" }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.479-0500 c20011| 2016-04-06T02:52:08.638-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.486-0500 c20011| 2016-04-06T02:52:08.638-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.490-0500 c20011| 2016-04-06T02:52:08.638-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.493-0500 c20011| 2016-04-06T02:52:08.638-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.494-0500 c20011| 2016-04-06T02:52:08.638-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:33.498-0500 c20011| 2016-04-06T02:52:08.638-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.501-0500 c20011| 2016-04-06T02:52:08.638-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|27, t: 1 } and is durable through: { ts: Timestamp 1459929128000|26, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.503-0500 c20011| 2016-04-06T02:52:08.638-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|27, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|26, t: 1 }, name-id: "106" }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.509-0500 c20011| 2016-04-06T02:52:08.638-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.514-0500 c20011| 2016-04-06T02:52:08.639-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.515-0500 c20011| 2016-04-06T02:52:08.639-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:33.521-0500 c20011| 2016-04-06T02:52:08.639-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|27, t: 1 } and is durable through: { ts: Timestamp 1459929128000|27, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.522-0500 c20011| 2016-04-06T02:52:08.639-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|27, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.524-0500 c20011| 2016-04-06T02:52:08.639-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.531-0500 c20011| 2016-04-06T02:52:08.639-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.536-0500 c20011| 2016-04-06T02:52:08.639-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.537-0500 c20011| 2016-04-06T02:52:08.639-0500 D COMMAND [conn16] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:33.541-0500 c20011| 2016-04-06T02:52:08.639-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.544-0500 c20011| 2016-04-06T02:52:08.640-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|27, t: 1 } and is durable through: { ts: Timestamp 1459929128000|27, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.547-0500 c20011| 2016-04-06T02:52:08.640-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.559-0500 c20011| 2016-04-06T02:52:08.645-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.635-0500-5704c02865c17830b843f183", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128635), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -98.0 }, max: { _id: MaxKey } }, left: { min: { _id: -98.0 }, max: { _id: -97.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -97.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 9ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.564-0500 c20011| 2016-04-06T02:52:08.645-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 6ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.569-0500 c20011| 2016-04-06T02:52:08.645-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.572-0500 c20011| 2016-04-06T02:52:08.645-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:33.574-0500 c20011| 2016-04-06T02:52:08.645-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f182') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.589-0500 c20011| 2016-04-06T02:52:08.646-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:33.591-0500 c20011| 2016-04-06T02:52:08.646-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:33.595-0500 c20011| 2016-04-06T02:52:08.646-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02865c17830b843f182') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.601-0500 c20011| 2016-04-06T02:52:08.646-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.615-0500 c20011| 2016-04-06T02:52:08.646-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:33.624-0500 c20011| 2016-04-06T02:52:08.647-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:33.632-0500 c20011| 2016-04-06T02:52:08.648-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.633-0500 c20011| 2016-04-06T02:52:08.648-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:33.635-0500 c20011| 2016-04-06T02:52:08.648-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|28, t: 1 } and is durable through: { ts: Timestamp 1459929128000|27, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.637-0500 c20011| 2016-04-06T02:52:08.648-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.641-0500 c20011| 2016-04-06T02:52:08.649-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|28, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:33.645-0500 c20011| 2016-04-06T02:52:08.649-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.656-0500 c20011| 2016-04-06T02:52:08.649-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|28, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|27, t: 1 }, name-id: "107" } [js_test:multi_coll_drop] 2016-04-06T02:52:33.661-0500 c20011| 2016-04-06T02:52:08.650-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.666-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.668-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.668-0500 c20013| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.684-0500 c20013| 2016-04-06T02:52:08.476-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.690-0500 c20013| 2016-04-06T02:52:08.476-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 363 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.692-0500 c20013| 2016-04-06T02:52:08.476-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 363 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.694-0500 c20013| 2016-04-06T02:52:08.477-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 363 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.696-0500 c20013| 2016-04-06T02:52:08.477-0500 D QUERY [rsSync] Only one plan is available; it will be run but 
will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:33.700-0500 c20013| 2016-04-06T02:52:08.477-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.704-0500 c20013| 2016-04-06T02:52:08.477-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 365 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.705-0500 c20013| 2016-04-06T02:52:08.477-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 365 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.708-0500 c20013| 2016-04-06T02:52:08.477-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 365 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.710-0500 c20013| 2016-04-06T02:52:08.477-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 360 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.712-0500 c20013| 2016-04-06T02:52:08.477-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.713-0500 c20013| 2016-04-06T02:52:08.477-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:33.714-0500 c20013| 2016-04-06T02:52:08.478-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 368 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.478-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.715-0500 c20013| 2016-04-06T02:52:08.478-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 368 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.722-0500 c20013| 2016-04-06T02:52:08.482-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: 
{ ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.727-0500 c20013| 2016-04-06T02:52:08.482-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 369 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.730-0500 c20013| 2016-04-06T02:52:08.482-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 369 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.736-0500 c20013| 2016-04-06T02:52:08.482-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 369 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.739-0500 c20013| 2016-04-06T02:52:08.482-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 368 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.742-0500 c20013| 2016-04-06T02:52:08.483-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.742-0500 c20013| 2016-04-06T02:52:08.483-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:33.747-0500 c20013| 2016-04-06T02:52:08.483-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 372 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.483-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.753-0500 c20013| 2016-04-06T02:52:08.483-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 372 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.762-0500 c20013| 2016-04-06T02:52:08.490-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 372 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|11, t: 1, h: 3457335805137684592, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.489-0500-5704c02806c33406d4d9c0c1", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128489), what: "shardCollection.end", ns: "multidrop.coll", details: { version: "1|0||5704c02806c33406d4d9c0c0" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.766-0500 c20013| 2016-04-06T02:52:08.492-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|11 and ending at ts: Timestamp 1459929128000|11 [js_test:multi_coll_drop] 2016-04-06T02:52:33.773-0500 c20013| 2016-04-06T02:52:08.492-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:33.777-0500 c20013| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.779-0500 c20013| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.782-0500 c20013| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.782-0500 c20013| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.784-0500 c20013| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.785-0500 c20013| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.786-0500 c20013| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.786-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.790-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.791-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.791-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.792-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.793-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.796-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.797-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.798-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.800-0500 c20013| 2016-04-06T02:52:08.493-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:33.800-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.802-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.805-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:33.808-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.812-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.814-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.817-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.817-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.821-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.827-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.831-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.834-0500 c20013| 2016-04-06T02:52:08.493-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.837-0500 c20013| 2016-04-06T02:52:08.494-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.837-0500 c20013| 2016-04-06T02:52:08.494-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.838-0500 c20013| 2016-04-06T02:52:08.494-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.840-0500 c20013| 2016-04-06T02:52:08.494-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:33.842-0500 c20013| 2016-04-06T02:52:08.494-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 374 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.494-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.844-0500 c20013| 2016-04-06T02:52:08.494-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 374 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.845-0500 c20013| 2016-04-06T02:52:08.495-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:33.856-0500 c20013| 2016-04-06T02:52:08.495-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.863-0500 c20013| 2016-04-06T02:52:08.495-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 375 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.864-0500 c20013| 2016-04-06T02:52:08.495-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 375 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.870-0500 c20013| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 375 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.878-0500 c20013| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 374 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.882-0500 c20013| 2016-04-06T02:52:08.496-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.884-0500 c20013| 2016-04-06T02:52:08.496-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:33.888-0500 c20013| 2016-04-06T02:52:08.496-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 378 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.496-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.889-0500 c20013| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 378 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.895-0500 c20011| 2016-04-06T02:52:08.655-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, 
memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.897-0500 c20011| 2016-04-06T02:52:08.655-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:33.899-0500 c20011| 2016-04-06T02:52:08.655-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|28, t: 1 } and is durable through: { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.900-0500 c20011| 2016-04-06T02:52:08.655-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.904-0500 d20010| 2016-04-06T02:52:18.562-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:33.905-0500 s20015| 2016-04-06T02:52:18.940-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:33.907-0500 s20014| 2016-04-06T02:52:17.199-0500 D ASIO [Balancer] startCommand: RemoteCommand 252 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:47.199-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929137199), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.908-0500 s20014| 2016-04-06T02:52:17.199-0500 I ASIO [Balancer] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.911-0500 s20014| 2016-04-06T02:52:17.199-0500 I ASIO [Balancer] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.913-0500 s20014| 2016-04-06T02:52:17.199-0500 I ASIO [Balancer] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:52:33.915-0500 c20011| 2016-04-06T02:52:08.655-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.921-0500 c20011| 2016-04-06T02:52:08.655-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:33.927-0500 c20011| 2016-04-06T02:52:08.655-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f182') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { 
acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:52:33.933-0500 c20011| 2016-04-06T02:52:08.655-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:33.939-0500 c20011| 2016-04-06T02:52:08.655-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:33.940-0500 c20011| 2016-04-06T02:52:08.656-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.944-0500 c20011| 2016-04-06T02:52:08.656-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:33.948-0500 c20011| 2016-04-06T02:52:08.658-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.950-0500 c20011| 2016-04-06T02:52:08.658-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:33.953-0500 c20011| 2016-04-06T02:52:08.658-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.958-0500 c20011| 2016-04-06T02:52:08.658-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|28, t: 1 } and is durable through: { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.970-0500 s20014| 2016-04-06T02:52:17.199-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.975-0500 s20014| 2016-04-06T02:52:17.199-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 253 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.976-0500 s20014| 2016-04-06T02:52:17.200-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to 
mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.976-0500 s20014| 2016-04-06T02:52:17.200-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 253 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:33.977-0500 s20014| 2016-04-06T02:52:17.200-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 252 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.978-0500 d20010| 2016-04-06T02:52:19.076-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:33.979-0500 s20015| 2016-04-06T02:52:18.941-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:33.982-0500 c20013| 2016-04-06T02:52:08.497-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.987-0500 c20013| 2016-04-06T02:52:08.497-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 379 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:33.990-0500 c20013| 2016-04-06T02:52:08.497-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 379 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:33.991-0500 c20013| 2016-04-06T02:52:08.498-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 379 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.997-0500 c20013| 2016-04-06T02:52:08.498-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 378 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|12, t: 1, h: 8307982106745841146, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:33.998-0500 c20013| 2016-04-06T02:52:08.498-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|12 and ending at ts: Timestamp 1459929128000|12 [js_test:multi_coll_drop] 2016-04-06T02:52:34.000-0500 c20013| 2016-04-06T02:52:08.498-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:34.009-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.010-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.010-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.010-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.011-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.011-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.012-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.016-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.018-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.019-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.020-0500 c20013| 2016-04-06T02:52:08.498-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.021-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.021-0500 c20013| 2016-04-06T02:52:08.499-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:34.022-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.024-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.027-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.028-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.030-0500 c20013| 2016-04-06T02:52:08.499-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.030-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.031-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:34.031-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.033-0500 c20013| 2016-04-06T02:52:08.499-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.033-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.034-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.034-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.035-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.036-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.037-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.037-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.038-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.038-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.038-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.039-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.040-0500 c20013| 2016-04-06T02:52:08.500-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.041-0500 c20013| 2016-04-06T02:52:08.500-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 382 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.500-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.043-0500 c20013| 2016-04-06T02:52:08.500-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 382 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.050-0500 c20013| 2016-04-06T02:52:08.500-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:34.054-0500 c20013| 2016-04-06T02:52:08.500-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.057-0500 c20013| 2016-04-06T02:52:08.500-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 383 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.060-0500 c20013| 2016-04-06T02:52:08.500-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 383 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.063-0500 c20013| 2016-04-06T02:52:08.501-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 383 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.066-0500 c20013| 2016-04-06T02:52:08.502-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.071-0500 c20013| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 385 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.073-0500 c20013| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 385 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.074-0500 c20013| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 385 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.082-0500 c20011| 2016-04-06T02:52:08.658-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.087-0500 c20011| 2016-04-06T02:52:08.658-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.089-0500 c20011| 2016-04-06T02:52:08.658-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.090-0500 c20011| 2016-04-06T02:52:08.658-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.093-0500 c20011| 2016-04-06T02:52:08.658-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|28, t: 1 } and is durable through: { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.101-0500 c20011| 2016-04-06T02:52:08.658-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.104-0500 c20011| 2016-04-06T02:52:08.659-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f184'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128658), why: "splitting chunk [{ _id: -97.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.110-0500 c20013| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 382 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: 
"local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.111-0500 c20013| 2016-04-06T02:52:08.503-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.114-0500 c20011| 2016-04-06T02:52:08.659-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.123-0500 c20011| 2016-04-06T02:52:08.659-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.133-0500 c20011| 2016-04-06T02:52:08.659-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.143-0500 c20011| 2016-04-06T02:52:08.659-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.152-0500 c20011| 2016-04-06T02:52:08.659-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.157-0500 c20011| 2016-04-06T02:52:08.661-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|29, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|28, t: 1 }, name-id: "108" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.161-0500 c20011| 2016-04-06T02:52:08.661-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.162-0500 c20011| 2016-04-06T02:52:08.661-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.165-0500 c20011| 2016-04-06T02:52:08.661-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.169-0500 c20011| 2016-04-06T02:52:08.661-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 
1459929128000|29, t: 1 } and is durable through: { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.171-0500 c20011| 2016-04-06T02:52:08.661-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|29, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|28, t: 1 }, name-id: "108" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.175-0500 c20011| 2016-04-06T02:52:08.661-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.181-0500 c20011| 2016-04-06T02:52:08.662-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.191-0500 c20011| 2016-04-06T02:52:08.662-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.191-0500 c20011| 2016-04-06T02:52:08.662-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.194-0500 c20011| 2016-04-06T02:52:08.663-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|29, t: 1 } and is durable through: { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.196-0500 c20011| 2016-04-06T02:52:08.663-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|29, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|28, t: 1 }, name-id: "108" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.198-0500 c20011| 2016-04-06T02:52:08.663-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.201-0500 c20011| 2016-04-06T02:52:08.663-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.203-0500 c20011| 2016-04-06T02:52:08.663-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.212-0500 c20011| 2016-04-06T02:52:08.664-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.214-0500 c20011| 2016-04-06T02:52:08.664-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.216-0500 c20011| 2016-04-06T02:52:08.664-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.218-0500 c20011| 2016-04-06T02:52:08.664-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|29, t: 1 } and is durable through: { ts: Timestamp 1459929128000|29, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.219-0500 c20011| 2016-04-06T02:52:08.664-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|29, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.229-0500 c20011| 2016-04-06T02:52:08.664-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.229-0500 s20014| 2016-04-06T02:52:17.200-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 252 finished with response: { ok: 0.0, errmsg: "not master", code: 10107 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.229-0500 s20014| 2016-04-06T02:52:17.200-0500 D NETWORK [Balancer] Marking host mongovm16:20011 as failed [js_test:multi_coll_drop] 2016-04-06T02:52:34.233-0500 s20014| 2016-04-06T02:52:17.200-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: NotMaster: not master [js_test:multi_coll_drop] 2016-04-06T02:52:34.237-0500 s20014| 2016-04-06T02:52:17.200-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:34.240-0500 s20014| 2016-04-06T02:52:17.200-0500 D NETWORK [Balancer] polling for status of connection to 
192.168.100.28:20011, event detected [js_test:multi_coll_drop] 2016-04-06T02:52:34.242-0500 s20014| 2016-04-06T02:52:17.200-0500 I NETWORK [Balancer] Socket closed remotely, no longer connected (idle 14 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:52:34.246-0500 s20014| 2016-04-06T02:52:17.200-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.246-0500 s20014| 2016-04-06T02:52:17.200-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:52:34.248-0500 s20014| 2016-04-06T02:52:17.201-0500 D NETWORK [Balancer] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:52:34.249-0500 s20014| 2016-04-06T02:52:17.201-0500 D NETWORK [Balancer] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:52:34.253-0500 s20014| 2016-04-06T02:52:17.201-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:52:34.255-0500 s20014| 2016-04-06T02:52:17.202-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20013, no events [js_test:multi_coll_drop] 2016-04-06T02:52:34.260-0500 c20011| 2016-04-06T02:52:08.664-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f184'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128658), why: "splitting chunk [{ _id: -97.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f184'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128658), why: "splitting chunk [{ _id: -97.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.268-0500 c20011| 2016-04-06T02:52:08.664-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.271-0500 c20011| 2016-04-06T02:52:08.664-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.272-0500 c20011| 2016-04-06T02:52:08.665-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, 
collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.279-0500 c20011| 2016-04-06T02:52:08.665-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.287-0500 c20011| 2016-04-06T02:52:08.666-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: -96.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-97.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-96.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|8 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.288-0500 c20011| 2016-04-06T02:52:08.666-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:34.292-0500 c20011| 2016-04-06T02:52:08.667-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:34.295-0500 c20011| 2016-04-06T02:52:08.667-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.296-0500 c20011| 2016-04-06T02:52:08.667-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-97.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.300-0500 c20011| 2016-04-06T02:52:08.667-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.301-0500 c20011| 2016-04-06T02:52:08.667-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.306-0500 c20011| 2016-04-06T02:52:08.667-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|29, t: 1 } and is durable through: { ts: Timestamp 1459929128000|29, t: 1 } [js_test:multi_coll_drop] 
2016-04-06T02:52:34.310-0500 c20011| 2016-04-06T02:52:08.667-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.314-0500 c20011| 2016-04-06T02:52:08.667-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.316-0500 c20011| 2016-04-06T02:52:08.667-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-96.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.320-0500 c20011| 2016-04-06T02:52:08.667-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.321-0500 c20011| 2016-04-06T02:52:08.667-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.331-0500 c20011| 2016-04-06T02:52:08.670-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.334-0500 c20011| 2016-04-06T02:52:08.670-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.345-0500 c20011| 2016-04-06T02:52:08.670-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.349-0500 c20011| 2016-04-06T02:52:08.670-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|30, t: 1 } and is durable through: { ts: Timestamp 1459929128000|29, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.368-0500 c20011| 
2016-04-06T02:52:08.670-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.373-0500 c20011| 2016-04-06T02:52:08.670-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.380-0500 c20011| 2016-04-06T02:52:08.670-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.388-0500 c20011| 2016-04-06T02:52:08.672-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.391-0500 c20011| 2016-04-06T02:52:08.672-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.397-0500 c20011| 2016-04-06T02:52:08.672-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|30, t: 1 } and is durable through: { ts: Timestamp 1459929128000|29, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.399-0500 c20011| 2016-04-06T02:52:08.672-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.407-0500 c20011| 2016-04-06T02:52:08.672-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.411-0500 c20011| 2016-04-06T02:52:08.672-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|30, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|29, t: 1 }, name-id: "109" } [js_test:multi_coll_drop] 
2016-04-06T02:52:34.419-0500 c20011| 2016-04-06T02:52:08.673-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.420-0500 c20011| 2016-04-06T02:52:08.673-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.432-0500 c20011| 2016-04-06T02:52:08.673-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.433-0500 c20011| 2016-04-06T02:52:08.673-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.440-0500 c20011| 2016-04-06T02:52:08.673-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.444-0500 c20011| 2016-04-06T02:52:08.673-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|30, t: 1 } and is durable through: { ts: Timestamp 1459929128000|30, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.456-0500 c20011| 2016-04-06T02:52:08.673-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|30, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.466-0500 c20011| 2016-04-06T02:52:08.673-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.470-0500 c20011| 2016-04-06T02:52:08.673-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|30, t: 1 } and is durable through: { ts: Timestamp 1459929128000|30, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.473-0500 c20011| 2016-04-06T02:52:08.673-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 
} [js_test:multi_coll_drop] 2016-04-06T02:52:34.477-0500 c20011| 2016-04-06T02:52:08.673-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.479-0500 c20011| 2016-04-06T02:52:08.673-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.486-0500 c20011| 2016-04-06T02:52:08.673-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: -96.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-97.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-96.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|8 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.490-0500 c20011| 2016-04-06T02:52:08.673-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.494-0500 c20011| 2016-04-06T02:52:08.673-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.673-0500-5704c02865c17830b843f185", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128673), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -97.0 }, max: { _id: MaxKey } }, left: { min: { _id: -97.0 }, max: { _id: -96.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -96.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: 
ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.497-0500 c20011| 2016-04-06T02:52:08.674-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.499-0500 c20011| 2016-04-06T02:52:08.674-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.505-0500 c20011| 2016-04-06T02:52:08.674-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.508-0500 c20011| 2016-04-06T02:52:08.674-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.514-0500 c20011| 2016-04-06T02:52:08.677-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|31, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|30, t: 1 }, name-id: "110" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.516-0500 c20011| 2016-04-06T02:52:08.677-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.520-0500 c20011| 2016-04-06T02:52:08.677-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.548-0500 c20011| 2016-04-06T02:52:08.678-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.548-0500 c20011| 2016-04-06T02:52:08.678-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.552-0500 c20011| 2016-04-06T02:52:08.678-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|31, t: 1 } and is durable through: { ts: Timestamp 1459929128000|30, t: 1 } 
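The replSetUpdatePosition round-trips above are what advance the "Updating _lastCommittedOpTime" lines: the primary may commit an optime once a majority of the three config-server members report it durable. A minimal shell-style sketch of that rule (illustrative only; majorityCommitPoint and the plain {term, secs, inc} optime shape are stand-ins invented for this note, not server APIs):

    // The commit point is the highest optime that a majority of members
    // (2 of the 3 config servers here) have reported as durable.
    function majorityCommitPoint(durable) {
        var cmp = function(a, b) {
            return (a.term - b.term) || (a.secs - b.secs) || (a.inc - b.inc);
        };
        var sorted = durable.slice().sort(cmp);       // ascending by optime
        var majority = Math.floor(sorted.length / 2) + 1;
        // the entry this far from the top has been reached by >= majority members
        return sorted[sorted.length - majority];
    }

    // With reports like those logged above (member 0 lagging at
    // Timestamp 1459929117000|1, the other two durable through |30):
    majorityCommitPoint([ { term: -1, secs: 1459929117, inc: 1 },
                          { term: 1,  secs: 1459929128, inc: 30 },
                          { term: 1,  secs: 1459929128, inc: 30 } ]);
    // --> { term: 1, secs: 1459929128, inc: 30 }, matching the
    //     "Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|30, t: 1 }" line.

This also explains the interleaved "Required snapshot optime ... is not yet part of the current 'committed' snapshot" messages: a w: "majority" writer on conn25 is parked until the commit point, and the corresponding committed snapshot (name-id "110"), catches up to its write.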
[js_test:multi_coll_drop] 2016-04-06T02:52:34.555-0500 c20011| 2016-04-06T02:52:08.678-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|31, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|30, t: 1 }, name-id: "110" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.559-0500 c20011| 2016-04-06T02:52:08.678-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.562-0500 c20011| 2016-04-06T02:52:08.678-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.567-0500 c20011| 2016-04-06T02:52:08.684-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.567-0500 c20011| 2016-04-06T02:52:08.684-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.573-0500 c20011| 2016-04-06T02:52:08.684-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.576-0500 c20011| 2016-04-06T02:52:08.684-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|31, t: 1 } and is durable through: { ts: Timestamp 1459929128000|30, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.579-0500 c20011| 2016-04-06T02:52:08.684-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|31, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|30, t: 1 }, name-id: "110" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.584-0500 c20011| 2016-04-06T02:52:08.684-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 
2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.588-0500 c20011| 2016-04-06T02:52:08.684-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.590-0500 c20012| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.591-0500 c20012| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.591-0500 c20012| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.594-0500 c20012| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.594-0500 c20012| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.597-0500 c20012| 2016-04-06T02:52:08.432-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:34.600-0500 c20012| 2016-04-06T02:52:08.432-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.601-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.603-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.606-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.607-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.608-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.609-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.610-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.612-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.612-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.616-0500 c20012| 
2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.616-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.617-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.618-0500 c20011| 2016-04-06T02:52:08.684-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.621-0500 c20011| 2016-04-06T02:52:08.684-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|31, t: 1 } and is durable through: { ts: Timestamp 1459929128000|31, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.624-0500 c20011| 2016-04-06T02:52:08.684-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|31, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.632-0500 c20011| 2016-04-06T02:52:08.684-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.641-0500 c20011| 2016-04-06T02:52:08.684-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.647-0500 c20011| 2016-04-06T02:52:08.684-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.658-0500 c20011| 2016-04-06T02:52:08.684-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.671-0500 c20011| 2016-04-06T02:52:08.685-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.673-0500-5704c02865c17830b843f185", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128673), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -97.0 }, max: { _id: MaxKey } }, left: { min: { 
_id: -97.0 }, max: { _id: -96.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -96.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.674-0500 c20011| 2016-04-06T02:52:08.685-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f184') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.676-0500 c20011| 2016-04-06T02:52:08.685-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.679-0500 c20011| 2016-04-06T02:52:08.685-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02865c17830b843f184') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.685-0500 c20011| 2016-04-06T02:52:08.685-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.689-0500 c20011| 2016-04-06T02:52:08.686-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.690-0500 c20011| 2016-04-06T02:52:08.686-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.693-0500 c20011| 2016-04-06T02:52:08.686-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.696-0500 c20011| 2016-04-06T02:52:08.688-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 2, cfgver: 1 } ] } 
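Each split above is bracketed by a distributed-lock handshake on config.locks: the findAndModify logged at 02:52:08.664 takes the lock for "multidrop.coll" (state 0 -> 2), the applyOps rewrites the two chunk documents under a { lastmod } precondition, the split is recorded in config.changelog, and the findAndModify at 02:52:08.685 releases the lock by its ObjectId. A condensed shell sketch of that sequence (illustrative only, with the who/process/when fields abridged; this is not the server's DistLockManager code):

    var config = db.getSiblingDB("config");
    // acquire: succeed only if nobody holds the lock (state: 0); the fresh
    // ObjectId lets the release match exactly this acquisition
    var lock = config.locks.findAndModify({
        query:  { _id: "multidrop.coll", state: 0 },
        update: { $set: { state: 2, ts: ObjectId(),
                          why: "splitting chunk ..." } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // ... applyOps updates config.chunks under the lastmod preCondition,
    //     then an insert into config.changelog records the split ...
    // release: reset state to 0, matched on the ObjectId taken at acquire
    config.locks.findAndModify({
        query:  { ts: lock.ts },
        update: { $set: { state: 0 } },
        writeConcern: { w: "majority", wtimeout: 15000 }
    });

The w: "majority" write concern on every step is why each lock and chunk write above is followed by a wait for the commit point before the command returns.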
[js_test:multi_coll_drop] 2016-04-06T02:52:34.697-0500 c20011| 2016-04-06T02:52:08.688-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:34.705-0500 c20011| 2016-04-06T02:52:08.688-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.707-0500 c20011| 2016-04-06T02:52:08.688-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|31, t: 1 } and is durable through: { ts: Timestamp 1459929128000|31, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.711-0500 c20011| 2016-04-06T02:52:08.688-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.720-0500 c20011| 2016-04-06T02:52:08.688-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.721-0500 c20013| 2016-04-06T02:52:08.503-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:34.724-0500 c20013| 2016-04-06T02:52:08.503-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 388 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.503-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.727-0500 c20013| 2016-04-06T02:52:08.503-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 388 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.731-0500 c20013| 2016-04-06T02:52:08.504-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.732-0500 c20013| 2016-04-06T02:52:08.504-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.734-0500 c20013| 2016-04-06T02:52:08.504-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.741-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: 'ns_1_min_1' io: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.745-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: 'ns_1_shard_1_min_1' io: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.749-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: 'ns_1_lastmod_1' io: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.751-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Relevant index 0 is kp: { lastmod: 1 } multikey name: 'doesnt_matter' [js_test:multi_coll_drop] 2016-04-06T02:52:34.754-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Relevant index 0 is kp: { lastmod: 1 } multikey name: 'doesnt_matter' [js_test:multi_coll_drop] 2016-04-06T02:52:34.755-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, lastmod: 1 } planHitEOF=1 [js_test:multi_coll_drop] 2016-04-06T02:52:34.756-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:34.758-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, shard: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:34.760-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:34.762-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:34.764-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:34.765-0500 c20013| 2016-04-06T02:52:08.504-0500 D QUERY [conn10] Winning plan: IXSCAN { ns: 1, lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.770-0500 c20013| 2016-04-06T02:52:08.505-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 fromMultiPlanner:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:530 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 
protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:34.780-0500 c20013| 2016-04-06T02:52:08.514-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 388 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|13, t: 1, h: -7456382829225788614, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f17c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128507), why: "splitting chunk [{ _id: MinKey }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.782-0500 c20013| 2016-04-06T02:52:08.515-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|13 and ending at ts: Timestamp 1459929128000|13 [js_test:multi_coll_drop] 2016-04-06T02:52:34.783-0500 c20013| 2016-04-06T02:52:08.515-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:34.784-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.786-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.787-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.789-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.789-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.793-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.795-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.795-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.796-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.796-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.800-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.802-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.804-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.805-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 
13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.805-0500 c20013| 2016-04-06T02:52:08.516-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:34.805-0500 c20013| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.806-0500 c20013| 2016-04-06T02:52:08.516-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.807-0500 c20013| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.808-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.808-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.809-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.809-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.812-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.813-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.815-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.818-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.819-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.819-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.820-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.822-0500 c20013| 2016-04-06T02:52:08.517-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 390 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.517-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.823-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.827-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.829-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 4] shutting down thread in 
pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.830-0500 c20013| 2016-04-06T02:52:08.517-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.831-0500 c20013| 2016-04-06T02:52:08.517-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 390 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.832-0500 c20013| 2016-04-06T02:52:08.518-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.836-0500 c20013| 2016-04-06T02:52:08.518-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:34.844-0500 c20013| 2016-04-06T02:52:08.518-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.850-0500 c20013| 2016-04-06T02:52:08.518-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 391 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.853-0500 c20013| 2016-04-06T02:52:08.518-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 391 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.854-0500 c20013| 2016-04-06T02:52:08.518-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 391 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.856-0500 c20013| 2016-04-06T02:52:08.520-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 390 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.862-0500 c20013| 2016-04-06T02:52:08.520-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.863-0500 c20013| 2016-04-06T02:52:08.520-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:34.868-0500 c20013| 2016-04-06T02:52:08.520-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 394 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.520-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } 
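The c20013 lines show the secondary's side of this traffic: rsBackgroundSync keeps re-issuing getMore against the sync source's local.oplog.rs (carrying its term and lastKnownCommittedOpTime), the applier consumes each fetched batch, and SyncSourceFeedback reports the new position upstream via replSetUpdatePosition. A rough shell-level sketch of that loop (applyOperation, reportUpstream, and lastAppliedTs are hypothetical placeholders; the real server uses an internal fetcher, not a shell cursor):

    // Tail the sync source's oplog from the last applied optime onward.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var cur = oplog.find({ ts: { $gte: lastAppliedTs } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (!cur.isExhausted()) {
        if (cur.hasNext()) {                 // blocks briefly, like maxTimeMS: 2500
            var op = cur.next();             // e.g. the config.locks update at ts ...|13
            applyOperation(op);              // the "replication batch size is 1" step
            reportUpstream(op.ts);           // becomes the replSetUpdatePosition above
        }
        // an empty nextBatch (nreturned:0 above) simply re-arms the getMore
    }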
[js_test:multi_coll_drop] 2016-04-06T02:52:34.871-0500 c20013| 2016-04-06T02:52:08.520-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 394 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.880-0500 c20013| 2016-04-06T02:52:08.521-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.887-0500 c20013| 2016-04-06T02:52:08.521-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 395 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:34.893-0500 c20013| 2016-04-06T02:52:08.521-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 395 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.894-0500 c20013| 2016-04-06T02:52:08.521-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 395 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.901-0500 c20013| 2016-04-06T02:52:08.524-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 394 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|14, t: 1, h: -6429269363497138108, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: -100.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_MinKey" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-100.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:34.907-0500 c20013| 2016-04-06T02:52:08.524-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|14 and ending at ts: Timestamp 1459929128000|14 [js_test:multi_coll_drop] 2016-04-06T02:52:34.909-0500 c20013| 2016-04-06T02:52:08.524-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:34.912-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.915-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.917-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.919-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.922-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.924-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.926-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.927-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.929-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.931-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.933-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.934-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.936-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.939-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.939-0500 c20013| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.940-0500 c20013| 2016-04-06T02:52:08.524-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:34.941-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.943-0500 c20013| 2016-04-06T02:52:08.525-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_MinKey" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.943-0500 c20013| 2016-04-06T02:52:08.525-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-100.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:34.945-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
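
Each oplog batch is applied by a freshly spun-up pool of sixteen "repl writer worker" threads that shut down as soon as the batch (size 1 here) is in. The "Using idhack" lines show the applier skipping plan selection for the _id lookups on config.chunks. The same fast path is visible from the shell against the chunk documents touched above:

    // IDHACK: an equality match on _id bypasses the query planner entirely.
    // Collection and _id value taken from the applyOps entry in the log above.
    var cfg = db.getSiblingDB("config");
    var plan = cfg.chunks.find({ _id: "multidrop.coll-_id_MinKey" }).explain();
    printjson(plan.queryPlanner.winningPlan);  // expected stage: "IDHACK"
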
2016-04-06T02:52:34.945-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.947-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.948-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.950-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.950-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.952-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.953-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.955-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.956-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.958-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.959-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.961-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.962-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.963-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.964-0500 c20013| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.964-0500 c20013| 2016-04-06T02:52:08.525-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:34.966-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.968-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.971-0500 c20012| 2016-04-06T02:52:08.433-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.984-0500 c20012| 2016-04-06T02:52:08.433-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 348 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.433-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:34.985-0500 c20012| 2016-04-06T02:52:08.433-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 348 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:34.986-0500 c20012| 2016-04-06T02:52:08.436-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.987-0500 c20012| 2016-04-06T02:52:08.436-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:34.990-0500 c20012| 2016-04-06T02:52:08.436-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:34.996-0500 c20012| 2016-04-06T02:52:08.436-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.021-0500 c20012| 2016-04-06T02:52:08.437-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 349 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.024-0500 c20012| 2016-04-06T02:52:08.437-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 349 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.027-0500 c20012| 2016-04-06T02:52:08.437-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 349 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.031-0500 
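
Note the c20012 report where memberId 1's durableOpTime (1459929128000|7) trails its appliedOpTime (|8): that op has been applied but not yet made durable, and the upstream node advances the majority commit point from the durable column. The same per-member optimes the reporter sends can be read on any member, a sketch assuming this server version's document-shaped optime:

    // Per-member replication progress, as aggregated by the primary.
    var s = assert.commandWorked(db.adminCommand({ replSetGetStatus: 1 }));
    s.members.forEach(function(m) {
        // m.optime is { ts: Timestamp, t: NumberLong } on this server version
        print(m.name + " state=" + m.stateStr + " optime=" + tojson(m.optime));
    });
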
c20012| 2016-04-06T02:52:08.446-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.033-0500 s20015| 2016-04-06T02:52:19.441-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:35.038-0500 c20013| 2016-04-06T02:52:08.526-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.043-0500 c20013| 2016-04-06T02:52:08.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 398 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.044-0500 c20013| 2016-04-06T02:52:08.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 398 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.046-0500 c20013| 2016-04-06T02:52:08.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 398 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.048-0500 c20013| 2016-04-06T02:52:08.526-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 400 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.526-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.053-0500 c20013| 2016-04-06T02:52:08.526-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 400 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.057-0500 c20013| 2016-04-06T02:52:08.527-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.062-0500 c20013| 2016-04-06T02:52:08.527-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 401 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.065-0500 c20013| 2016-04-06T02:52:08.527-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 401 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.065-0500 c20013| 2016-04-06T02:52:08.528-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 401 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.070-0500 c20013| 2016-04-06T02:52:08.528-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 400 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.075-0500 c20013| 2016-04-06T02:52:08.528-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.075-0500 c20013| 2016-04-06T02:52:08.528-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:35.085-0500 c20013| 2016-04-06T02:52:08.528-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 404 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.528-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.089-0500 c20013| 2016-04-06T02:52:08.528-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 404 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.093-0500 c20013| 2016-04-06T02:52:08.529-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 404 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|15, t: 1, h: 7753166607224067281, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.528-0500-5704c02865c17830b843f17d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128528), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey } }, left: { min: { _id: MinKey }, max: { _id: -100.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.102-0500 c20013| 2016-04-06T02:52:08.529-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting 
at ts: Timestamp 1459929128000|15 and ending at ts: Timestamp 1459929128000|15 [js_test:multi_coll_drop] 2016-04-06T02:52:35.112-0500 c20013| 2016-04-06T02:52:08.529-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:52:35.128-0500 c20013| 2016-04-06T02:52:08.529-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.132-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.138-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.138-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.140-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.142-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.145-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.146-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.146-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.149-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.151-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.154-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.155-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.155-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.160-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.160-0500 c20013| 2016-04-06T02:52:08.529-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:35.161-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.164-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.165-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer 
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.170-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.170-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.172-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.173-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.173-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.175-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.176-0500 c20013| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.179-0500 c20013| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.180-0500 c20013| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.180-0500 c20013| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.182-0500 c20013| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.183-0500 c20013| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.184-0500 c20013| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.186-0500 c20013| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.189-0500 c20013| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.192-0500 c20013| 2016-04-06T02:52:08.530-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.203-0500 c20013| 2016-04-06T02:52:08.530-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.208-0500 c20013| 2016-04-06T02:52:08.530-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 406 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.213-0500 c20013| 2016-04-06T02:52:08.530-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 406 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.215-0500 c20013| 2016-04-06T02:52:08.530-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 406 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.218-0500 c20013| 2016-04-06T02:52:08.531-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 408 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.531-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.222-0500 c20013| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 408 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.226-0500 c20013| 2016-04-06T02:52:08.531-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.234-0500 c20013| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 409 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.236-0500 c20013| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 409 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.237-0500 c20013| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 409 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.242-0500 c20013| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 408 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.244-0500 c20013| 2016-04-06T02:52:08.532-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.245-0500 c20013| 2016-04-06T02:52:08.532-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:35.248-0500 c20013| 2016-04-06T02:52:08.532-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 412 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.532-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.251-0500 c20013| 2016-04-06T02:52:08.532-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 412 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.253-0500 c20013| 2016-04-06T02:52:08.532-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 412 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|16, t: 1, h: 1691968072355252476, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.255-0500 c20013| 2016-04-06T02:52:08.533-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|16 and ending at ts: Timestamp 1459929128000|16 [js_test:multi_coll_drop] 2016-04-06T02:52:35.257-0500 c20013| 2016-04-06T02:52:08.533-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.258-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.259-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.260-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.261-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.261-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.263-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.263-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.269-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.271-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.276-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.276-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.278-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.279-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.281-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.283-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.285-0500 c20013| 2016-04-06T02:52:08.533-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:35.289-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.290-0500 c20013| 2016-04-06T02:52:08.533-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:35.291-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.294-0500 c20013| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
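
The batch being applied here is the config.locks update fetched by Request 412 ({ $set: { state: 0 } } on _id "multidrop.coll"), i.e. the distributed collection lock being released after the split commit; a few records below (Request 418's batch) it is retaken with state: 2 and a why of "splitting chunk [{ _id: -100.0 }, { _id: MaxKey }) in multidrop.coll". The lock document can be inspected directly:

    // Distributed lock state for the collection: 0 = unlocked, 2 = held.
    // The 'ts', 'when' and 'why' fields record who took it and for what.
    db.getSiblingDB("config").locks.find({ _id: "multidrop.coll" }).pretty();
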
2016-04-06T02:52:35.297-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.298-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.299-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.303-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.305-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.310-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.313-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.317-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.318-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.319-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.320-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.320-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.322-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.324-0500 c20013| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.326-0500 c20013| 2016-04-06T02:52:08.534-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.357-0500 c20013| 2016-04-06T02:52:08.534-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.359-0500 c20013| 2016-04-06T02:52:08.534-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 414 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.361-0500 c20013| 2016-04-06T02:52:08.534-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 414 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.363-0500 c20013| 2016-04-06T02:52:08.534-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 414 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.368-0500 c20013| 2016-04-06T02:52:08.535-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 416 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.535-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.370-0500 c20013| 2016-04-06T02:52:08.535-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 416 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.372-0500 c20013| 2016-04-06T02:52:08.539-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 416 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.373-0500 c20013| 2016-04-06T02:52:08.539-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.373-0500 c20013| 2016-04-06T02:52:08.539-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:35.375-0500 c20013| 2016-04-06T02:52:08.539-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 418 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.539-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.377-0500 c20013| 2016-04-06T02:52:08.539-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 418 on host mongovm16:20011 
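
At this point c20013 has advanced _lastCommittedOpTime to 1459929128000|16: the commit point trails the newest write until a majority of members report a durable optime at or past it, which is exactly what the writeConcern { w: "majority", wtimeout: 15000 } on the metadata applyOps waits for. A majority-committed view can also be requested on reads, a sketch assuming majority read concern is enabled on this server:

    // Read config.chunks at the majority commit point rather than the local snapshot.
    var cfg = db.getSiblingDB("config");
    var res = assert.commandWorked(cfg.runCommand(
        { find: "chunks", filter: { ns: "multidrop.coll" },
          readConcern: { level: "majority" } }));
    res.cursor.firstBatch.forEach(printjson);
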
[js_test:multi_coll_drop] 2016-04-06T02:52:35.385-0500 c20013| 2016-04-06T02:52:08.540-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.396-0500 c20013| 2016-04-06T02:52:08.540-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 419 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.402-0500 c20013| 2016-04-06T02:52:08.540-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 419 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.408-0500 c20013| 2016-04-06T02:52:08.541-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 419 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.424-0500 c20013| 2016-04-06T02:52:08.542-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 418 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|17, t: 1, h: -503423693469934212, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f17e'), state: 2, when: new Date(1459929128542), why: "splitting chunk [{ _id: -100.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.425-0500 c20013| 2016-04-06T02:52:08.542-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|17 and ending at ts: Timestamp 1459929128000|17 [js_test:multi_coll_drop] 2016-04-06T02:52:35.425-0500 c20013| 2016-04-06T02:52:08.542-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.430-0500 c20013| 2016-04-06T02:52:08.542-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.432-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.434-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.439-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.441-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.442-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.443-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.446-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.448-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.451-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.454-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.458-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.461-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.461-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.464-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.464-0500 c20013| 2016-04-06T02:52:08.543-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:35.465-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.466-0500 c20013| 2016-04-06T02:52:08.543-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:35.467-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.479-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
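
Between the lock release and the re-acquire being applied here, the primary also recorded the completed split in config.changelog (Request 404's batch above): a document with what: "split" and the before/left/right chunk bounds. That collection is the handiest audit trail for the long run of splits this test performs:

    // Most recent split events for the test collection, newest first;
    // field names taken from the changelog insert in the log above.
    db.getSiblingDB("config").changelog
      .find({ what: "split", ns: "multidrop.coll" })
      .sort({ time: -1 })
      .limit(5)
      .forEach(printjson);
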
2016-04-06T02:52:35.479-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.481-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.481-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.482-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.482-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.482-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.482-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.483-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.484-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.484-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.485-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.486-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.487-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.489-0500 c20013| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.491-0500 c20013| 2016-04-06T02:52:08.544-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.520-0500 c20013| 2016-04-06T02:52:08.544-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.524-0500 c20013| 2016-04-06T02:52:08.544-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 422 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.539-0500 c20013| 2016-04-06T02:52:08.544-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 422 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.540-0500 c20013| 2016-04-06T02:52:08.544-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 422 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.552-0500 c20013| 2016-04-06T02:52:08.545-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 424 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.545-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.557-0500 c20013| 2016-04-06T02:52:08.545-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 424 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.563-0500 c20013| 2016-04-06T02:52:08.546-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.569-0500 c20013| 2016-04-06T02:52:08.546-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 425 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.571-0500 c20013| 2016-04-06T02:52:08.546-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 425 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.577-0500 c20013| 2016-04-06T02:52:08.547-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 425 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.581-0500 c20013| 2016-04-06T02:52:08.547-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 424 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.585-0500 c20013| 2016-04-06T02:52:08.548-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|17, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.586-0500 c20013| 2016-04-06T02:52:08.548-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:35.588-0500 c20013| 2016-04-06T02:52:08.548-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 428 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.548-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.589-0500 c20013| 2016-04-06T02:52:08.548-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 428 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.597-0500 c20013| 2016-04-06T02:52:08.549-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 428 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|18, t: 1, h: -6620679516550812391, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: -99.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-100.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-99.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.602-0500 c20013| 2016-04-06T02:52:08.549-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|18 and ending at ts: Timestamp 1459929128000|18 [js_test:multi_coll_drop] 2016-04-06T02:52:35.605-0500 c20013| 2016-04-06T02:52:08.549-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.607-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.608-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.610-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.630-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.633-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.636-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.637-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.644-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.644-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.645-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.645-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.684-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.692-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.694-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.702-0500 c20013| 2016-04-06T02:52:08.550-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:35.707-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.712-0500 c20013| 2016-04-06T02:52:08.550-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.715-0500 c20013| 2016-04-06T02:52:08.550-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-100.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:35.716-0500 c20013| 2016-04-06T02:52:08.550-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-99.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:35.717-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
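
Request 428's batch carries the next metadata commit: the chunk [{ _id: -100.0 }, { _id: MaxKey }) is split at -99.0, bumping the chunk versions to 1000|3 and 1000|4 under the same epoch. The test is walking the key space one split at a time; the stock shell helper for requesting such a split looks like this (issued against a mongos, not the config server directly):

    // Client-side request corresponding to the split committed above.
    sh.splitAt("multidrop.coll", { _id: -99.0 });
    // Each commit rewrites the two config.chunks documents via applyOps with
    // writeConcern { w: "majority" }, as seen in the oplog entry above.
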
2016-04-06T02:52:35.720-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.721-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.723-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.723-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.725-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.725-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.727-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.729-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.729-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.733-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.735-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.737-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.738-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.739-0500 c20013| 2016-04-06T02:52:08.552-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 430 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.552-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.744-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.752-0500 c20013| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.758-0500 c20013| 2016-04-06T02:52:08.552-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.765-0500 c20013| 2016-04-06T02:52:08.552-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.774-0500 c20013| 2016-04-06T02:52:08.552-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 431 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.776-0500 c20013| 2016-04-06T02:52:08.552-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 431 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.777-0500 c20013| 2016-04-06T02:52:08.552-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 431 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.780-0500 c20013| 2016-04-06T02:52:08.554-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.787-0500 c20013| 2016-04-06T02:52:08.554-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 433 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.788-0500 c20013| 2016-04-06T02:52:08.554-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 433 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.792-0500 c20013| 2016-04-06T02:52:08.554-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 433 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.793-0500 c20013| 2016-04-06T02:52:08.554-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 430 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.806-0500 c20013| 2016-04-06T02:52:08.556-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 430 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|19, t: 1, h: 6809334556305798525, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.554-0500-5704c02865c17830b843f17f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128554), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -100.0 }, max: { _id: MaxKey } }, left: { min: { _id: -100.0 }, max: { _id: -99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -99.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.808-0500 c20013| 2016-04-06T02:52:08.556-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|18, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.814-0500 c20013| 2016-04-06T02:52:08.556-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|19 and ending at ts: Timestamp 1459929128000|19 [js_test:multi_coll_drop] 2016-04-06T02:52:35.820-0500 c20013| 2016-04-06T02:52:08.557-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.820-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.820-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.822-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.825-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.826-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.827-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.829-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.831-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.834-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.835-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
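The records above show the feedback half of the replication loop: the SyncSourceFeedback reporter pushes this member's { durableOpTime, appliedOpTime } pairs upstream with replSetUpdatePosition, and once a majority is durable at an optime the primary advances the commit point, which flows back to this secondary in lastKnownCommittedOpTime on the next oplog getMore (hence the "Updating _lastCommittedOpTime" records throughout this stretch). A small observational sketch, using the shell's rs.status() wrapper for replSetGetStatus:

    // Print the per-member optimes that feed the majority commit point.
    rs.status().members.forEach(function (m) {
        print(m.name + " state=" + m.stateStr + " optime=" + tojson(m.optime));
    });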
2016-04-06T02:52:35.836-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.837-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.839-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.840-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.842-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.842-0500 c20013| 2016-04-06T02:52:08.557-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:35.844-0500 c20013| 2016-04-06T02:52:08.557-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.846-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.849-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.850-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.853-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.853-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.855-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.857-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.858-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.859-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.860-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.862-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.862-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.864-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.865-0500 c20013| 
2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.867-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.868-0500 c20013| 2016-04-06T02:52:08.558-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.872-0500 c20013| 2016-04-06T02:52:08.558-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.877-0500 c20013| 2016-04-06T02:52:08.558-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.884-0500 c20013| 2016-04-06T02:52:08.558-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 436 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.885-0500 c20013| 2016-04-06T02:52:08.558-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 436 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.886-0500 c20013| 2016-04-06T02:52:08.559-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 436 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.889-0500 c20013| 2016-04-06T02:52:08.559-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 438 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.559-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|18, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.890-0500 c20013| 2016-04-06T02:52:08.559-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 438 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.893-0500 c20013| 2016-04-06T02:52:08.562-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 
1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.896-0500 c20013| 2016-04-06T02:52:08.562-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 439 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:35.901-0500 c20013| 2016-04-06T02:52:08.562-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 439 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.904-0500 c20013| 2016-04-06T02:52:08.563-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 439 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.912-0500 c20013| 2016-04-06T02:52:08.563-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 438 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.914-0500 c20013| 2016-04-06T02:52:08.563-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|19, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.929-0500 c20013| 2016-04-06T02:52:08.563-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:35.931-0500 c20013| 2016-04-06T02:52:08.563-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 442 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.563-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:35.934-0500 c20013| 2016-04-06T02:52:08.563-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 442 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:35.937-0500 c20013| 2016-04-06T02:52:08.564-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 442 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|20, t: 1, h: -3904568443163544586, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:35.939-0500 c20013| 2016-04-06T02:52:08.564-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|20 and ending at ts: Timestamp 1459929128000|20 [js_test:multi_coll_drop] 2016-04-06T02:52:35.942-0500 c20013| 2016-04-06T02:52:08.564-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:35.944-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.947-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.949-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.950-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.950-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.952-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.953-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.954-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.956-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.956-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.957-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.960-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.961-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.962-0500 c20013| 2016-04-06T02:52:08.564-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:35.962-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.963-0500 c20013| 2016-04-06T02:52:08.564-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:35.965-0500 c20013| 2016-04-06T02:52:08.564-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.965-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.966-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.966-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
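The config.locks update applied above (and the state: 2 reacquisition a few records later, with why: "splitting chunk ...") is the distributed lock that serializes chunk-metadata changes: the document is keyed by the namespace, state 0 means unlocked and state 2 means locked, and each acquisition records ts, when, and why. A minimal sketch for reading the lock document from the shell, with names taken from the oplog entries in this log:

    // Read the distributed lock document toggled by the split path.
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    if (lock) {
        // state: 0 = unlocked, 2 = locked; 'why' records the holder's reason.
        print("state=" + lock.state + " why=" + tojson(lock.why));
    }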
2016-04-06T02:52:35.967-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.968-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.969-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.970-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.971-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.973-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.976-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.980-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.980-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.980-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.982-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.982-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.984-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:35.985-0500 c20013| 2016-04-06T02:52:08.565-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.006-0500 c20013| 2016-04-06T02:52:08.565-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.023-0500 c20013| 2016-04-06T02:52:08.565-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.030-0500 c20013| 2016-04-06T02:52:08.565-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 444 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.031-0500 c20013| 2016-04-06T02:52:08.565-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 444 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.032-0500 c20013| 2016-04-06T02:52:08.566-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 444 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.052-0500 c20013| 2016-04-06T02:52:08.566-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 446 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.566-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.057-0500 c20013| 2016-04-06T02:52:08.566-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 446 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.063-0500 c20013| 2016-04-06T02:52:08.572-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.079-0500 c20013| 2016-04-06T02:52:08.572-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 447 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.082-0500 c20013| 2016-04-06T02:52:08.572-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 447 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.084-0500 c20013| 2016-04-06T02:52:08.572-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 447 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.089-0500 c20013| 2016-04-06T02:52:08.572-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 446 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.092-0500 c20013| 2016-04-06T02:52:08.572-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|20, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.095-0500 c20013| 2016-04-06T02:52:08.572-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:36.098-0500 c20013| 2016-04-06T02:52:08.573-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 450 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.573-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.099-0500 c20013| 2016-04-06T02:52:08.573-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 450 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.102-0500 c20013| 2016-04-06T02:52:08.574-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|2 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.105-0500 c20013| 2016-04-06T02:52:08.574-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.109-0500 c20013| 2016-04-06T02:52:08.574-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|2 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.112-0500 c20013| 2016-04-06T02:52:08.574-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:36.132-0500 c20013| 2016-04-06T02:52:08.576-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|2 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:713 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:36.140-0500 c20013| 2016-04-06T02:52:08.580-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 450 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|21, t: 1, h: -7910042500719648602, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f180'), state: 2, when: new Date(1459929128579), why: "splitting chunk [{ _id: -99.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.141-0500 c20013| 2016-04-06T02:52:08.581-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|21 and ending at ts: Timestamp 1459929128000|21 [js_test:multi_coll_drop] 2016-04-06T02:52:36.145-0500 c20013| 2016-04-06T02:52:08.581-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.147-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.160-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.169-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.172-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.174-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.177-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.178-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.180-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.181-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.181-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.185-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.187-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.192-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.192-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.195-0500 c20013| 2016-04-06T02:52:08.581-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:36.196-0500 c20013| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.201-0500 c20013| 2016-04-06T02:52:08.581-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:36.203-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.206-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.207-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
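The conn10 command a few records back is the read-your-own-write step: after committing chunk metadata with w: "majority", the sharding code reads config.chunks back with readConcern { level: "majority", afterOpTime: ... }, so the find blocks ("Waiting for 'committed' snapshot") until the committed snapshot covers that write, then answers from the { ns: 1, lastmod: 1 } index in 2ms. A sketch of the command shape only; afterOpTime is injected internally by the sharding code, and the Timestamp value here is a placeholder, not something to supply by hand:

    // Majority read-back of chunk metadata, shaped like the logged command.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1000, 2) } },
        sort: { lastmod: 1 },
        readConcern: { level: "majority" },  // afterOpTime omitted: internal
        maxTimeMS: 30000
    });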
2016-04-06T02:52:36.208-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.211-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.212-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.213-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.217-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.219-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.222-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.222-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.222-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.225-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.228-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.230-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.231-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.237-0500 c20013| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.237-0500 c20013| 2016-04-06T02:52:08.582-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.241-0500 c20013| 2016-04-06T02:52:08.582-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.250-0500 c20013| 2016-04-06T02:52:08.582-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 452 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.254-0500 c20013| 2016-04-06T02:52:08.582-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 452 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.256-0500 c20013| 2016-04-06T02:52:08.583-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 452 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.259-0500 c20013| 2016-04-06T02:52:08.583-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 454 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.583-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.261-0500 c20013| 2016-04-06T02:52:08.583-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 454 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.266-0500 c20013| 2016-04-06T02:52:08.584-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.272-0500 c20013| 2016-04-06T02:52:08.584-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 455 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.275-0500 c20013| 2016-04-06T02:52:08.585-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 455 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.276-0500 c20013| 2016-04-06T02:52:08.585-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 455 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.278-0500 c20013| 2016-04-06T02:52:08.586-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 454 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.281-0500 c20013| 2016-04-06T02:52:08.586-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|21, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.283-0500 c20013| 2016-04-06T02:52:08.586-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:36.294-0500 c20013| 2016-04-06T02:52:08.586-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 458 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.586-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.298-0500 c20013| 2016-04-06T02:52:08.586-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 458 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.308-0500 c20013| 2016-04-06T02:52:08.587-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 458 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|22, t: 1, h: 8266891418716651152, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: -98.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-99.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-98.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.311-0500 c20013| 2016-04-06T02:52:08.587-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|22 and ending at ts: Timestamp 1459929128000|22 [js_test:multi_coll_drop] 2016-04-06T02:52:36.312-0500 c20013| 2016-04-06T02:52:08.588-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.317-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.319-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.321-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.321-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.324-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.327-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.329-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.329-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.333-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.337-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.337-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.339-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.341-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.341-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.342-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.343-0500 c20013| 2016-04-06T02:52:08.588-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:36.343-0500 c20013| 2016-04-06T02:52:08.588-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.344-0500 c20013| 2016-04-06T02:52:08.588-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-99.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:36.345-0500 c20013| 2016-04-06T02:52:08.588-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-98.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:36.347-0500 c20013| 2016-04-06T02:52:08.589-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 460 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.589-0500 
cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.348-0500 c20013| 2016-04-06T02:52:08.589-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 460 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.348-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.350-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.350-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.351-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.352-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.353-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.353-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.358-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.359-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.361-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.363-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.364-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.368-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.369-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.373-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.375-0500 c20013| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.382-0500 c20013| 2016-04-06T02:52:08.591-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.388-0500 c20013| 2016-04-06T02:52:08.591-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.393-0500 c20013| 2016-04-06T02:52:08.591-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 461 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.393-0500 c20013| 2016-04-06T02:52:08.591-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 461 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.395-0500 c20013| 2016-04-06T02:52:08.591-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 461 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.407-0500 c20013| 2016-04-06T02:52:08.597-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.411-0500 c20013| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 463 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.413-0500 c20013| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 463 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.414-0500 c20013| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 463 finished with 
[js_test:multi_coll_drop] 2016-04-06T02:52:36.414-0500 c20013| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 463 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.415-0500 c20013| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 460 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.417-0500 c20013| 2016-04-06T02:52:08.597-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|22, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.417-0500 c20013| 2016-04-06T02:52:08.598-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:36.422-0500 c20013| 2016-04-06T02:52:08.598-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 466 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.598-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.425-0500 c20013| 2016-04-06T02:52:08.598-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 466 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:36.431-0500 c20013| 2016-04-06T02:52:08.598-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 466 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|23, t: 1, h: 6062546662183075299, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.598-0500-5704c02865c17830b843f181", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128598), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -99.0 }, max: { _id: MaxKey } }, left: { min: { _id: -99.0 }, max: { _id: -98.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -98.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.438-0500 c20013| 2016-04-06T02:52:08.598-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|23 and ending at ts: Timestamp 1459929128000|23
[js_test:multi_coll_drop] 2016-04-06T02:52:36.438-0500 c20013| 2016-04-06T02:52:08.599-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.439-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.440-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.442-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.444-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.447-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.448-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.449-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.451-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.451-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.453-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.456-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.457-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.458-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.460-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.461-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.462-0500 c20013| 2016-04-06T02:52:08.599-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:36.462-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.463-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.466-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.467-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:36.469-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.470-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.470-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.473-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.473-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.474-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.475-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.478-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.479-0500 c20013| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.481-0500 c20013| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.483-0500 c20013| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.484-0500 c20013| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.485-0500 c20013| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.485-0500 c20013| 2016-04-06T02:52:08.600-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.490-0500 c20013| 2016-04-06T02:52:08.600-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.496-0500 c20013| 2016-04-06T02:52:08.600-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 468 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.497-0500 c20013| 2016-04-06T02:52:08.600-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 468 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.499-0500 c20013| 2016-04-06T02:52:08.600-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 468 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.503-0500 c20013| 2016-04-06T02:52:08.601-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 470 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.601-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.504-0500 c20013| 2016-04-06T02:52:08.601-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 470 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.511-0500 c20013| 2016-04-06T02:52:08.602-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.515-0500 c20013| 2016-04-06T02:52:08.602-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 471 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.516-0500 c20013| 2016-04-06T02:52:08.602-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 471 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:36.517-0500 c20013| 2016-04-06T02:52:08.602-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 471 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.520-0500 c20013| 2016-04-06T02:52:08.602-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 470 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.520-0500 c20013| 2016-04-06T02:52:08.602-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|23, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.522-0500 c20013| 2016-04-06T02:52:08.602-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:36.524-0500 c20013| 2016-04-06T02:52:08.602-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 474 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.602-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.526-0500 c20013| 2016-04-06T02:52:08.602-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 474 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:36.529-0500 c20013| 2016-04-06T02:52:08.603-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 474 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|24, t: 1, h: 3786699700518885231, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.530-0500 c20013| 2016-04-06T02:52:08.604-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|24 and ending at ts: Timestamp 1459929128000|24
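
The config.locks update in the last batch above ({ $set: { state: 0 } } on _id: "multidrop.coll") is the distributed-lock protocol releasing the collection lock after a split; a few entries later the same document is rewritten with state: 2, a fresh ts, and a why of "splitting chunk [{ _id: -98.0 }, { _id: MaxKey }) in multidrop.coll" as the next split takes it again. A small shell sketch for inspecting that lock document; the state values 0 = free and 2 = held are the two that appear in this log:

    // Inspect the distributed lock that the split operations above contend on.
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    // state: 0 = free, 2 = held (the values visible in this log);
    // ts identifies the lock session, why records the reason it was taken.
    if (lock && lock.state !== 0) {
        print("multidrop.coll is locked: " + lock.why);
    }
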
[js_test:multi_coll_drop] 2016-04-06T02:52:36.531-0500 c20013| 2016-04-06T02:52:08.605-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:36.533-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.534-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.536-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.536-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.537-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.538-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.539-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.541-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.543-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.545-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.545-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.548-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.558-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.559-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.561-0500 c20013| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.563-0500 c20013| 2016-04-06T02:52:08.606-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 476 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.606-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.567-0500 c20013| 2016-04-06T02:52:08.606-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 476 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:36.568-0500 c20013| 2016-04-06T02:52:08.607-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:36.570-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
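
The getMore commands against local.oplog.rs with maxTimeMS: 2500 are the background-sync fetcher driving a tailable, awaitData cursor (id 17466612721): each round trip blocks up to 2.5 seconds waiting for new oplog entries, and the internal-only term and lastKnownCommittedOpTime fields piggyback the fetcher's view of the commit point. The same tailing loop can be approximated from the shell with plain find/getMore commands; a sketch under that assumption, with the internal fields omitted:

    // Tail another member's oplog roughly the way rsBackgroundSync does.
    var local = db.getSiblingDB("local");
    var last = local.oplog.rs.find().sort({ $natural: -1 }).limit(1).next().ts;

    // Open a tailable, awaitData cursor positioned after the newest entry.
    var res = local.runCommand({
        find: "oplog.rs",
        filter: { ts: { $gt: last } },
        tailable: true,
        awaitData: true
    });

    for (var i = 0; i < 5; i++) {
        // Each getMore blocks up to 2.5s, matching maxTimeMS: 2500 in the log.
        var more = local.runCommand({ getMore: res.cursor.id, collection: "oplog.rs", maxTimeMS: 2500 });
        more.cursor.nextBatch.forEach(printjson);
    }
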
[js_test:multi_coll_drop] 2016-04-06T02:52:36.575-0500 c20013| 2016-04-06T02:52:08.607-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:36.575-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.577-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.579-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.579-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.580-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.581-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.581-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.582-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.585-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.586-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.589-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.591-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.591-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.595-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.596-0500 c20013| 2016-04-06T02:52:08.607-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.598-0500 c20013| 2016-04-06T02:52:08.608-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.599-0500 c20013| 2016-04-06T02:52:08.608-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.603-0500 c20013| 2016-04-06T02:52:08.608-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.607-0500 c20013| 2016-04-06T02:52:08.608-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 477 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.608-0500 c20013| 2016-04-06T02:52:08.608-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 477 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.609-0500 c20013| 2016-04-06T02:52:08.608-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 477 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.610-0500 c20013| 2016-04-06T02:52:08.609-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 476 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.611-0500 c20013| 2016-04-06T02:52:08.609-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|24, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.617-0500 c20013| 2016-04-06T02:52:08.609-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:36.621-0500 c20013| 2016-04-06T02:52:08.610-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 480 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.610-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.629-0500 c20013| 2016-04-06T02:52:08.610-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 480 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.647-0500 c20013| 2016-04-06T02:52:08.610-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.661-0500 c20013| 2016-04-06T02:52:08.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 481 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.663-0500 c20013| 2016-04-06T02:52:08.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 481 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.664-0500 c20013| 2016-04-06T02:52:08.610-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 481 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.667-0500 c20013| 2016-04-06T02:52:08.613-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|4 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.668-0500 c20013| 2016-04-06T02:52:08.613-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.674-0500 c20013| 2016-04-06T02:52:08.613-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|4 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.677-0500 c20013| 2016-04-06T02:52:08.613-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:36.683-0500 c20013| 2016-04-06T02:52:08.614-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|4 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:36.688-0500 c20013| 2016-04-06T02:52:08.616-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 480 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|25, t: 1, h: -2372094527379662980, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f182'), state: 2, when: new Date(1459929128615), why: "splitting chunk [{ _id: -98.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.689-0500 c20013| 2016-04-06T02:52:08.616-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|25 and ending at ts: Timestamp 1459929128000|25 [js_test:multi_coll_drop] 2016-04-06T02:52:36.691-0500 c20013| 2016-04-06T02:52:08.617-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
[js_test:multi_coll_drop] 2016-04-06T02:52:36.691-0500 c20013| 2016-04-06T02:52:08.617-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:36.691-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.692-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.693-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.695-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.697-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.700-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.701-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.705-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.706-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.710-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.711-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.734-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.735-0500 c20013| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.736-0500 c20013| 2016-04-06T02:52:08.618-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:36.736-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.739-0500 c20013| 2016-04-06T02:52:08.618-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.740-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.742-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.747-0500 c20013| 2016-04-06T02:52:08.618-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 484 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.618-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }
[js_test:multi_coll_drop]
2016-04-06T02:52:36.750-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.751-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.753-0500 c20013| 2016-04-06T02:52:08.618-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 484 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.754-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.755-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.757-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.759-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.759-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.760-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.761-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.762-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.763-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.764-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.767-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.769-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.770-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.771-0500 c20013| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.773-0500 c20013| 2016-04-06T02:52:08.619-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.778-0500 c20013| 2016-04-06T02:52:08.619-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.783-0500 c20013| 2016-04-06T02:52:08.619-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 485 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.785-0500 c20013| 2016-04-06T02:52:08.619-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 485 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.786-0500 c20013| 2016-04-06T02:52:08.619-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 485 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.792-0500 c20013| 2016-04-06T02:52:08.625-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.797-0500 c20013| 2016-04-06T02:52:08.625-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 487 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.804-0500 c20013| 2016-04-06T02:52:08.625-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 487 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.809-0500 c20013| 2016-04-06T02:52:08.626-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 487 finished with 
response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.810-0500 c20013| 2016-04-06T02:52:08.626-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 484 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.812-0500 c20013| 2016-04-06T02:52:08.627-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|25, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.812-0500 c20013| 2016-04-06T02:52:08.627-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:36.815-0500 c20013| 2016-04-06T02:52:08.627-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 490 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.627-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.817-0500 c20013| 2016-04-06T02:52:08.627-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 490 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:36.822-0500 c20013| 2016-04-06T02:52:08.629-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 490 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|26, t: 1, h: 4415888972038189494, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: -97.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-98.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-97.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.825-0500 c20013| 2016-04-06T02:52:08.629-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|26 and ending at ts: Timestamp 1459929128000|26
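
The op: "c" entry in the last batch is the split commit itself: the config primary wrote both post-split chunk documents in a single applyOps command, so the metadata change lands atomically, under w: "majority". Note how each new chunk keeps the collection's lastmodEpoch but gets a bumped version (1|7 and 1|8 in Timestamp major|minor notation). A condensed sketch of that command, with values copied from the oplog entry above; applyOps runs against the admin database, and b: true marks each update as an upsert:

    // The split commit: both chunk documents written atomically via applyOps.
    var splitCommit = {
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o2: { _id: "multidrop.coll-_id_-98.0" },
              o: { _id: "multidrop.coll-_id_-98.0", ns: "multidrop.coll",
                   min: { _id: -98.0 }, max: { _id: -97.0 },
                   lastmod: Timestamp(1, 7), // chunk version 1|7
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   shard: "shard0000" } },
            { op: "u", b: true, ns: "config.chunks",
              o2: { _id: "multidrop.coll-_id_-97.0" },
              o: { _id: "multidrop.coll-_id_-97.0", ns: "multidrop.coll",
                   min: { _id: -97.0 }, max: { _id: MaxKey },
                   lastmod: Timestamp(1, 8), // chunk version 1|8
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   shard: "shard0000" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    };
    // db.adminCommand(splitCommit) is how such a command is issued;
    // shown here only to decode the oplog entry above.
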
[js_test:multi_coll_drop] 2016-04-06T02:52:36.829-0500 c20013| 2016-04-06T02:52:08.629-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:36.830-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.832-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.835-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.837-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.839-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.839-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.840-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.843-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.844-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.845-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.846-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.861-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.863-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.872-0500 c20013| 2016-04-06T02:52:08.629-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:36.873-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.873-0500 c20013| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.882-0500 c20013| 2016-04-06T02:52:08.630-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-98.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.883-0500 c20013| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.885-0500 c20013| 2016-04-06T02:52:08.630-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-97.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.888-0500 c20013| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop]
2016-04-06T02:52:36.889-0500 c20013| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.889-0500 c20013| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.894-0500 c20013| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.895-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.897-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.900-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.904-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.904-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.907-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.908-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.909-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.911-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.911-0500 c20013| 2016-04-06T02:52:08.631-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 492 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.631-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:36.915-0500 c20013| 2016-04-06T02:52:08.631-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 492 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.915-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.920-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.921-0500 c20013| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:36.922-0500 c20013| 2016-04-06T02:52:08.633-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:36.927-0500 c20013| 2016-04-06T02:52:08.633-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.932-0500 c20013| 2016-04-06T02:52:08.633-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 493 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.933-0500 c20013| 2016-04-06T02:52:08.633-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 493 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.935-0500 c20013| 2016-04-06T02:52:08.633-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 493 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:36.940-0500 c20013| 2016-04-06T02:52:08.634-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.943-0500 c20013| 2016-04-06T02:52:08.634-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 495 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:36.944-0500 c20013| 2016-04-06T02:52:08.634-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 495 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:36.945-0500 c20013| 2016-04-06T02:52:08.635-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 495 finished with 
response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.948-0500 c20013| 2016-04-06T02:52:08.635-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 492 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.950-0500 c20013| 2016-04-06T02:52:08.635-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|26, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.954-0500 c20013| 2016-04-06T02:52:08.635-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:36.957-0500 c20013| 2016-04-06T02:52:08.636-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 498 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.636-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.958-0500 c20013| 2016-04-06T02:52:08.636-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 498 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:36.965-0500 c20013| 2016-04-06T02:52:08.636-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 498 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|27, t: 1, h: -3202951646012415608, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.635-0500-5704c02865c17830b843f183", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128635), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -98.0 }, max: { _id: MaxKey } }, left: { min: { _id: -98.0 }, max: { _id: -97.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -97.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:36.973-0500 c20013| 2016-04-06T02:52:08.636-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|27 and ending at ts: Timestamp 1459929128000|27
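
Every split also leaves an audit record in config.changelog (the op: "i" entry above), capturing the pre-split range and the resulting left/right chunks with their new versions. That makes the changelog the easiest place to reconstruct what happened to a collection's chunks, for example:

    // List the split history recorded for the collection under test.
    db.getSiblingDB("config").changelog
      .find({ what: "split", ns: "multidrop.coll" })
      .sort({ time: 1 })
      .forEach(function(entry) {
          print(tojson(entry.time) + " split at " + tojson(entry.details.left.max) +
                " within " + tojson(entry.details.before));
      });
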
[js_test:multi_coll_drop] 2016-04-06T02:52:36.976-0500 c20013| 2016-04-06T02:52:08.636-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:36.978-0500 c20013| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.979-0500 c20013| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.980-0500 c20013| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.983-0500 c20013| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.983-0500 c20013| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.985-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.987-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.989-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.989-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.990-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.991-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.992-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.993-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.993-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.994-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.995-0500 c20013| 2016-04-06T02:52:08.637-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:36.996-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:36.999-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.011-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.014-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop]
2016-04-06T02:52:37.015-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.018-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.019-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.020-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.022-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.023-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.024-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.024-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.026-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.028-0500 c20013| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.029-0500 c20013| 2016-04-06T02:52:08.638-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.032-0500 c20013| 2016-04-06T02:52:08.638-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.032-0500 c20013| 2016-04-06T02:52:08.638-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.033-0500 c20013| 2016-04-06T02:52:08.638-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.040-0500 c20013| 2016-04-06T02:52:08.638-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.048-0500 c20013| 2016-04-06T02:52:08.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 500 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.051-0500 c20013| 2016-04-06T02:52:08.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 500 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.056-0500 c20013| 2016-04-06T02:52:08.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 500 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.065-0500 c20013| 2016-04-06T02:52:08.639-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.074-0500 c20013| 2016-04-06T02:52:08.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 502 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.078-0500 c20013| 2016-04-06T02:52:08.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 502 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.079-0500 c20013| 2016-04-06T02:52:08.640-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 502 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.085-0500 c20013| 2016-04-06T02:52:08.645-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 504 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.645-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.087-0500 c20013| 2016-04-06T02:52:08.645-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 504 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.088-0500 c20013| 2016-04-06T02:52:08.645-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 504 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.090-0500 c20013| 2016-04-06T02:52:08.645-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|27, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.092-0500 c20013| 2016-04-06T02:52:08.645-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:37.099-0500 c20013| 2016-04-06T02:52:08.646-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 506 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.645-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.103-0500 c20013| 2016-04-06T02:52:08.646-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 506 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.107-0500 c20013| 2016-04-06T02:52:08.647-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 506 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|28, t: 1, h: -3132328473915241474, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.109-0500 c20013| 2016-04-06T02:52:08.648-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|28 and ending at ts: Timestamp 1459929128000|28 [js_test:multi_coll_drop] 2016-04-06T02:52:37.111-0500 c20013| 2016-04-06T02:52:08.650-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 508 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.650-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.113-0500 c20013| 2016-04-06T02:52:08.650-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 508 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.116-0500 c20013| 2016-04-06T02:52:08.652-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.117-0500 c20013| 2016-04-06T02:52:08.654-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.121-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.122-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.123-0500 c20013| 2016-04-06T02:52:08.654-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.123-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.125-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.126-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.128-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.128-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.131-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.134-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.134-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.135-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.136-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.136-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.139-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.143-0500 c20013| 2016-04-06T02:52:08.655-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:37.144-0500 c20013| 2016-04-06T02:52:08.655-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:37.146-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.147-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
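The one-document batch applied here is the entry at 1459929128000|28: an _id-point ("idhack") update to config.locks setting state: 0, i.e. the distributed lock on "multidrop.coll" being released after the previous split. The very next entry, |29 a few lines below, re-acquires it with state: 2 and why: "splitting chunk [{ _id: -97.0 }, { _id: MaxKey }) in multidrop.coll", so the lock cycles 2 -> 0 -> 2 once per split. A quick, hypothetical shell check of the document those entries mutate (state 0 = unlocked, 2 = locked, as seen in the oplog):

    // Inspect the distributed-lock document updated at |28 and |29.
    db.getSiblingDB("config").locks.find({ _id: "multidrop.coll" }).pretty()
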
2016-04-06T02:52:37.149-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.152-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.153-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.154-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.157-0500 c20013| 2016-04-06T02:52:08.656-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.157-0500 c20013| 2016-04-06T02:52:08.656-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.158-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.159-0500 c20013| 2016-04-06T02:52:08.656-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.162-0500 c20013| 2016-04-06T02:52:08.656-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 508 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.162-0500 c20013| 2016-04-06T02:52:08.656-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.163-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.164-0500 c20013| 2016-04-06T02:52:08.656-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.166-0500 c20013| 2016-04-06T02:52:08.656-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.167-0500 c20013| 2016-04-06T02:52:08.656-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.167-0500 c20013| 2016-04-06T02:52:08.656-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:37.168-0500 c20013| 2016-04-06T02:52:08.655-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.170-0500 c20013| 2016-04-06T02:52:08.656-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.181-0500 c20013| 2016-04-06T02:52:08.656-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 510 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.656-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.183-0500 c20013| 2016-04-06T02:52:08.656-0500 D 
QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.186-0500 c20013| 2016-04-06T02:52:08.656-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 510 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.187-0500 c20013| 2016-04-06T02:52:08.657-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|6 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.190-0500 c20013| 2016-04-06T02:52:08.657-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.194-0500 c20013| 2016-04-06T02:52:08.657-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|6 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.199-0500 c20013| 2016-04-06T02:52:08.657-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:37.209-0500 c20013| 2016-04-06T02:52:08.657-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|6 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:37.214-0500 c20013| 2016-04-06T02:52:08.658-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.221-0500 c20013| 2016-04-06T02:52:08.658-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 511 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|28, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.221-0500 c20013| 2016-04-06T02:52:08.658-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 511 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.223-0500 c20013| 2016-04-06T02:52:08.658-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 511 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.226-0500 c20013| 2016-04-06T02:52:08.658-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.232-0500 c20013| 2016-04-06T02:52:08.658-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 512 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.234-0500 c20013| 2016-04-06T02:52:08.658-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 512 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.236-0500 c20013| 2016-04-06T02:52:08.658-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 512 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.240-0500 c20013| 2016-04-06T02:52:08.659-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 510 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|29, t: 1, h: -150120968679180590, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f184'), state: 2, when: new Date(1459929128658), why: "splitting chunk [{ _id: -97.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.244-0500 c20013| 2016-04-06T02:52:08.659-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|29 and ending at ts: Timestamp 1459929128000|29 [js_test:multi_coll_drop] 2016-04-06T02:52:37.246-0500 c20013| 2016-04-06T02:52:08.659-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.247-0500 c20013| 2016-04-06T02:52:08.659-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.248-0500 c20013| 2016-04-06T02:52:08.659-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.249-0500 c20013| 2016-04-06T02:52:08.659-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.250-0500 c20013| 2016-04-06T02:52:08.659-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.251-0500 c20013| 2016-04-06T02:52:08.659-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.253-0500 c20013| 2016-04-06T02:52:08.659-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.256-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.257-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.259-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.259-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.260-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.260-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.261-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.261-0500 c20013| 2016-04-06T02:52:08.660-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:37.263-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.264-0500 c20013| 2016-04-06T02:52:08.660-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:37.266-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.267-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.271-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.272-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
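The conn10 command a little earlier in this stretch is worth pausing on: the find on config.chunks carries readConcern { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, so the node logs "Waiting for 'committed' snapshot to be available for reading" and answers only once its majority-committed snapshot has reached that optime. That is how the sharding code reads back chunk metadata it has just written with w: "majority". The same command rebuilt as an explicit runCommand, purely as a sketch; note the log prints timestamps as seconds*1000|increment, so the logged "Timestamp 1000|6" is the chunk version Timestamp(1, 6):

    // The logged config.chunks read, spelled out by hand.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 6) } },
        sort: { lastmod: 1 },
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929128, 28), t: NumberLong(1) } },
        maxTimeMS: 30000
    })
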
2016-04-06T02:52:37.273-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.276-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.278-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.280-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.280-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.285-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.285-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.288-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.289-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.295-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.298-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.299-0500 c20013| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.299-0500 c20013| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.302-0500 c20013| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.302-0500 c20013| 2016-04-06T02:52:08.661-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.309-0500 c20013| 2016-04-06T02:52:08.661-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.314-0500 c20013| 2016-04-06T02:52:08.661-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 516 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.315-0500 c20013| 2016-04-06T02:52:08.661-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 516 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.318-0500 c20013| 2016-04-06T02:52:08.661-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 516 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.321-0500 c20013| 2016-04-06T02:52:08.662-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 518 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.662-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.322-0500 c20013| 2016-04-06T02:52:08.662-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 518 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.329-0500 c20013| 2016-04-06T02:52:08.664-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.333-0500 c20013| 2016-04-06T02:52:08.664-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 519 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.334-0500 c20013| 2016-04-06T02:52:08.664-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 519 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.342-0500 c20013| 2016-04-06T02:52:08.664-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 519 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.344-0500 c20013| 2016-04-06T02:52:08.664-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 518 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.345-0500 c20013| 2016-04-06T02:52:08.664-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|29, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.349-0500 c20013| 2016-04-06T02:52:08.665-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:37.351-0500 c20013| 2016-04-06T02:52:08.665-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 522 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.665-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.353-0500 c20013| 2016-04-06T02:52:08.665-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 522 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.359-0500 c20013| 2016-04-06T02:52:08.667-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 522 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|30, t: 1, h: -3082120306973010549, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: -96.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-97.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-96.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.361-0500 c20013| 2016-04-06T02:52:08.667-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|30 and ending at ts: Timestamp 1459929128000|30 [js_test:multi_coll_drop] 2016-04-06T02:52:37.363-0500 c20013| 2016-04-06T02:52:08.667-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:52:37.380-0500 c20013| 2016-04-06T02:52:08.667-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.381-0500 c20013| 2016-04-06T02:52:08.667-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.383-0500 c20013| 2016-04-06T02:52:08.667-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.384-0500 c20013| 2016-04-06T02:52:08.667-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.384-0500 c20013| 2016-04-06T02:52:08.667-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.387-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.387-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.387-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.389-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.389-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.392-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.392-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.394-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.398-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.398-0500 c20013| 2016-04-06T02:52:08.668-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:37.400-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.404-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.404-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.416-0500 c20013| 2016-04-06T02:52:08.668-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-97.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:37.417-0500 c20013| 2016-04-06T02:52:08.668-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-96.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:37.419-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
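The |30 entry fetched just above is how a split actually lands in the config metadata: one applyOps command, oplogged as op: "c" on config.$cmd, upserting (b: true) both resulting chunk documents so the version bump is atomic, under writeConcern { w: "majority", wtimeout: 15000 }. Reassembled from that oplog entry as a sketch (same Timestamp caveat: the logged 1000|9 and 1000|10 are chunk versions 1|9 and 1|10), not something the test issues by hand:

    // Shape of the split commit replicated at ts 1459929128000|30.
    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o2: { _id: "multidrop.coll-_id_-97.0" },
              o: { _id: "multidrop.coll-_id_-97.0", ns: "multidrop.coll",
                   min: { _id: -97.0 }, max: { _id: -96.0 },
                   lastmod: Timestamp(1, 9),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   shard: "shard0000" } },
            { op: "u", b: true, ns: "config.chunks",
              o2: { _id: "multidrop.coll-_id_-96.0" },
              o: { _id: "multidrop.coll-_id_-96.0", ns: "multidrop.coll",
                   min: { _id: -96.0 }, max: { _id: MaxKey },
                   lastmod: Timestamp(1, 10),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   shard: "shard0000" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    })
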
2016-04-06T02:52:37.421-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.423-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.425-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.428-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.428-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.430-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.432-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.434-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.435-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.436-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.437-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.437-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.439-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.442-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.444-0500 c20013| 2016-04-06T02:52:08.668-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.464-0500 c20013| 2016-04-06T02:52:08.669-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.467-0500 c20013| 2016-04-06T02:52:08.669-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 524 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.669-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.475-0500 c20013| 2016-04-06T02:52:08.669-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.479-0500 c20013| 2016-04-06T02:52:08.669-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 525 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.482-0500 c20013| 2016-04-06T02:52:08.669-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 525 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.484-0500 c20013| 2016-04-06T02:52:08.670-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 525 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.490-0500 c20013| 2016-04-06T02:52:08.670-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 524 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.498-0500 c20013| 2016-04-06T02:52:08.673-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.505-0500 c20013| 2016-04-06T02:52:08.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 527 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.506-0500 c20013| 2016-04-06T02:52:08.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 527 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.508-0500 c20013| 2016-04-06T02:52:08.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 527 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.511-0500 c20013| 2016-04-06T02:52:08.673-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 524 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.513-0500 c20013| 2016-04-06T02:52:08.673-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|30, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.513-0500 c20013| 2016-04-06T02:52:08.673-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:37.516-0500 c20013| 2016-04-06T02:52:08.673-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 530 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.673-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.519-0500 c20013| 2016-04-06T02:52:08.674-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 530 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.521-0500 c20013| 2016-04-06T02:52:08.674-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 530 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|31, t: 1, h: -5487869586575022175, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.673-0500-5704c02865c17830b843f185", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128673), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -97.0 }, max: { _id: MaxKey } }, left: { min: { _id: -97.0 }, max: { _id: -96.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -96.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.523-0500 c20013| 2016-04-06T02:52:08.674-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|31 and ending at ts: Timestamp 1459929128000|31 [js_test:multi_coll_drop] 2016-04-06T02:52:37.527-0500 c20013| 2016-04-06T02:52:08.674-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.528-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.529-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.529-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.530-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.531-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.532-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.568-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.569-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.569-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.569-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.573-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.575-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.577-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.580-0500 c20013| 2016-04-06T02:52:08.675-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:37.581-0500 c20013| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.582-0500 c20013| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.583-0500 c20013| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.585-0500 c20013| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.587-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.590-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
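The |31 entry applied just above is the audit trail: each split also inserts a config.changelog document recording the before range and the resulting left/right chunks with their new versions. Meanwhile the steady "Reporter sending slave oplog progress" / replSetUpdatePosition traffic is every member pushing its durable and applied optimes to the sync source, and those reports are what drive each "Updating _lastCommittedOpTime" step. Two small, hypothetical shell probes built only from what this log shows:

    // Most recent split recorded for the test collection.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 }).limit(1).pretty()

    // Per-member applied optimes, the state that the
    // replSetUpdatePosition reports above keep current.
    rs.status().members.forEach(function (m) {
        print(m.name, tojson(m.optime));
    })
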
2016-04-06T02:52:37.591-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.595-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.597-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.598-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.599-0500 c20013| 2016-04-06T02:52:08.677-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 532 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.677-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.600-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.601-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.602-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.602-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.604-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.605-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.607-0500 c20013| 2016-04-06T02:52:08.677-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.609-0500 c20013| 2016-04-06T02:52:08.677-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 532 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.610-0500 c20013| 2016-04-06T02:52:08.683-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.612-0500 c20013| 2016-04-06T02:52:08.683-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:37.613-0500 c20013| 2016-04-06T02:52:08.684-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:37.619-0500 c20013| 2016-04-06T02:52:08.684-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.622-0500 c20013| 2016-04-06T02:52:08.684-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 533 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:37.625-0500 c20013| 2016-04-06T02:52:08.684-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 533 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.627-0500 c20013| 2016-04-06T02:52:08.684-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 533 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.629-0500 c20013| 2016-04-06T02:52:08.684-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 532 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.630-0500 c20013| 2016-04-06T02:52:08.685-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|31, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.634-0500 c20013| 2016-04-06T02:52:08.685-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:37.639-0500 c20013| 2016-04-06T02:52:08.685-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 536 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.685-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:37.640-0500 c20013| 2016-04-06T02:52:08.686-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 536 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:37.673-0500 c20013| 2016-04-06T02:52:08.686-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 536 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|32, t: 1, h: -2637408664367781023, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:37.675-0500 c20013| 2016-04-06T02:52:08.686-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog 
[js_test:multi_coll_drop] 2016-04-06T02:52:37.675-0500 c20013| 2016-04-06T02:52:08.686-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|32 and ending at ts: Timestamp 1459929128000|32
[js_test:multi_coll_drop] 2016-04-06T02:52:37.677-0500 c20013| 2016-04-06T02:52:08.687-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:37.682-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.683-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.685-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.685-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.695-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.698-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.700-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.702-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.703-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.704-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.705-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.707-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.709-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.714-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.716-0500 c20013| 2016-04-06T02:52:08.687-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:37.718-0500 c20013| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.722-0500 c20013| 2016-04-06T02:52:08.687-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.723-0500 c20013| 2016-04-06T02:52:08.687-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.731-0500 c20013| 2016-04-06T02:52:08.687-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 538 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.733-0500 c20013| 2016-04-06T02:52:08.687-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 538 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:37.737-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.738-0500 c20013| 2016-04-06T02:52:08.688-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 538 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.740-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.740-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.741-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.742-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.742-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.746-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.747-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.747-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.749-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.750-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.750-0500 c20013| 2016-04-06T02:52:08.688-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.751-0500 c20013| 2016-04-06T02:52:08.689-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.753-0500 c20013| 2016-04-06T02:52:08.689-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.756-0500 c20013| 2016-04-06T02:52:08.689-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.758-0500 c20013| 2016-04-06T02:52:08.689-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.759-0500 c20013| 2016-04-06T02:52:08.689-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.763-0500 c20013| 2016-04-06T02:52:08.689-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 540 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.689-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.766-0500 c20013| 2016-04-06T02:52:08.689-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 540 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:37.771-0500 c20013| 2016-04-06T02:52:08.689-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:37.778-0500 c20013| 2016-04-06T02:52:08.689-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.789-0500 c20013| 2016-04-06T02:52:08.689-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 541 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.790-0500 c20013| 2016-04-06T02:52:08.689-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 541 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:37.790-0500 c20013| 2016-04-06T02:52:08.689-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 541 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.797-0500 c20013| 2016-04-06T02:52:08.690-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.803-0500 c20013| 2016-04-06T02:52:08.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 543 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.809-0500 c20013| 2016-04-06T02:52:08.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 543 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:37.811-0500 c20013| 2016-04-06T02:52:08.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 543 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.813-0500 c20013| 2016-04-06T02:52:08.690-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 540 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.818-0500 c20013| 2016-04-06T02:52:08.690-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|32, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.821-0500 c20013| 2016-04-06T02:52:08.690-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:37.827-0500 c20013| 2016-04-06T02:52:08.691-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 546 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.691-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.830-0500 c20013| 2016-04-06T02:52:08.691-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 546 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:37.841-0500 c20013| 2016-04-06T02:52:08.692-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.856-0500 c20013| 2016-04-06T02:52:08.692-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } }
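
The conn10 find above is the sharding catalog client reading chunk metadata with readConcern level "majority" plus an afterOpTime floor, and the node parks the command (the "Waiting for 'committed' snapshot" line) until its majority-committed snapshot reaches that opTime. A sketch reconstructed from the logged request; afterOpTime is an internal field the catalog client sets, and the host and values are taken from this log:

    // Re-issue the logged metadata read: newest chunk for multidrop.coll,
    // visible only once the majority-committed snapshot reaches the opTime.
    var conn = new Mongo("mongovm16:20011");  // assumed config primary in this log
    var res = conn.getDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        maxTimeMS: 30000,
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929128, 32), t: NumberLong(1) } }
    });
    printjson(res.cursor.firstBatch[0]);  // the highest-lastmod chunk document
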
[js_test:multi_coll_drop] 2016-04-06T02:52:37.871-0500 c20013| 2016-04-06T02:52:08.692-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.876-0500 c20013| 2016-04-06T02:52:08.692-0500 D QUERY [conn10] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: 'ns_1_min_1' io: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.896-0500 c20013| 2016-04-06T02:52:08.692-0500 D QUERY [conn10] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: 'ns_1_shard_1_min_1' io: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.901-0500 c20013| 2016-04-06T02:52:08.692-0500 D QUERY [conn10] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: 'ns_1_lastmod_1' io: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.913-0500 c20013| 2016-04-06T02:52:08.693-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, min: 1 } planHitEOF=0
[js_test:multi_coll_drop] 2016-04-06T02:52:37.923-0500 c20013| 2016-04-06T02:52:08.693-0500 D QUERY [conn10] score(1.0002) = baseScore(1) + productivity((0 advanced)/(1 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002)
[js_test:multi_coll_drop] 2016-04-06T02:52:37.928-0500 c20013| 2016-04-06T02:52:08.693-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, shard: 1, min: 1 } planHitEOF=0
[js_test:multi_coll_drop] 2016-04-06T02:52:37.929-0500 c20013| 2016-04-06T02:52:08.693-0500 D QUERY [conn10] score(1.0002) = baseScore(1) + productivity((0 advanced)/(1 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002)
[js_test:multi_coll_drop] 2016-04-06T02:52:37.931-0500 c20013| 2016-04-06T02:52:08.693-0500 D QUERY [conn10] Scoring query plan: IXSCAN { ns: 1, lastmod: 1 } planHitEOF=1
[js_test:multi_coll_drop] 2016-04-06T02:52:37.932-0500 c20013| 2016-04-06T02:52:08.693-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:37.935-0500 c20013| 2016-04-06T02:52:08.693-0500 D QUERY [conn10] Winning plan: IXSCAN { ns: 1, lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.939-0500 c20013| 2016-04-06T02:52:08.693-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 fromMultiPlanner:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
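
The three "Scoring query plan" lines spell out the multi-planner's ranking formula directly: score = baseScore(1) + productivity (docs advanced / works) + 0.0001 per tie-breaker bonus. Recomputing the logged numbers shows why ns_1_lastmod_1 wins: it advanced a document and hit EOF, earning the noSortBonus that the other two indexes forfeit to the sort on lastmod. A quick check, using only the figures printed above:

    // Recompute the scores printed above (formula as logged, not taken from source code).
    function planScore(advanced, works, bonuses) {
        return 1 + advanced / works + 0.0001 * bonuses;  // base + productivity + tieBreakers
    }
    print(planScore(0, 1, 2));  // 1.0002: { ns: 1, min: 1 } and { ns: 1, shard: 1, min: 1 }
    print(planScore(1, 1, 3));  // 2.0003: { ns: 1, lastmod: 1 }, the winning plan
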
[js_test:multi_coll_drop] 2016-04-06T02:52:37.943-0500 c20013| 2016-04-06T02:52:08.694-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 546 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|33, t: 1, h: 1320383207073572803, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f186'), state: 2, when: new Date(1459929128693), why: "splitting chunk [{ _id: -96.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.945-0500 c20013| 2016-04-06T02:52:08.694-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|33 and ending at ts: Timestamp 1459929128000|33
[js_test:multi_coll_drop] 2016-04-06T02:52:37.947-0500 c20013| 2016-04-06T02:52:08.695-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:37.948-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.956-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.956-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.957-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.958-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.959-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.960-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.967-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.972-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.976-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.978-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.978-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.981-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.982-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.985-0500 c20013| 2016-04-06T02:52:08.695-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:37.985-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
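
The replicated update in Request 546's batch is the config-server side of the distributed lock protocol: the shard doing the split marks the config.locks document for multidrop.coll with state: 2 (held), a fresh ts identifying the holder, and a why string recording the reason, while the earlier $set of state: 0 was the matching release. The lock document can be inspected with an ordinary query (host assumed; state semantics as observed in this log, 0 = free, 2 = held):

    // Look at the distributed lock whose transitions are replicating above.
    var conn = new Mongo("mongovm16:20011");  // assumed config primary in this log
    printjson(conn.getDB("config").locks.findOne({ _id: "multidrop.coll" }));
    // -> { _id: "multidrop.coll", state: 2, ts: ObjectId(...),
    //      why: "splitting chunk [{ _id: -96.0 }, { _id: MaxKey }) in multidrop.coll", ... }
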
[js_test:multi_coll_drop] 2016-04-06T02:52:37.986-0500 c20013| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.986-0500 c20013| 2016-04-06T02:52:08.696-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:37.987-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.987-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.988-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.988-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.988-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.990-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.990-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.992-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.992-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.993-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:37.995-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.009-0500 c20013| 2016-04-06T02:52:08.696-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 548 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.696-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.010-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.011-0500 c20013| 2016-04-06T02:52:08.696-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 548 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.011-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.018-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.020-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.020-0500 c20013| 2016-04-06T02:52:08.696-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.021-0500 c20013| 2016-04-06T02:52:08.697-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:38.025-0500 c20013| 2016-04-06T02:52:08.697-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.026-0500 c20013| 2016-04-06T02:52:08.697-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 549 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.027-0500 c20013| 2016-04-06T02:52:08.697-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 549 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.029-0500 c20013| 2016-04-06T02:52:08.697-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 549 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.031-0500 c20013| 2016-04-06T02:52:08.700-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.035-0500 c20013| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 551 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.038-0500 c20013| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 551 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.040-0500 c20013| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 551 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.042-0500 c20013| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 548 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.045-0500 c20013| 2016-04-06T02:52:08.700-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|33, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.049-0500 c20013| 2016-04-06T02:52:08.700-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:38.052-0500 c20013| 2016-04-06T02:52:08.700-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 554 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.700-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.054-0500 c20013| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 554 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.063-0500 c20013| 2016-04-06T02:52:08.706-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 554 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|34, t: 1, h: 1358286614020305507, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: -95.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-96.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-95.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.067-0500 c20013| 2016-04-06T02:52:08.706-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|34 and ending at ts: Timestamp 1459929128000|34
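
Request 554's batch carries the commit of the split itself: one applyOps command document that upserts both post-split chunk documents, bumping the chunk version minors to 1|11 and 1|12 under the same epoch, so both halves replicate as a single atomic unit with w: "majority". A sketch of its shape, with every value copied from the logged entry (note the shell's Timestamp(1, 11) is what the log renders as "Timestamp 1000|11"); this mirrors what the config primary executed, and re-running it would merely re-apply the same idempotent upserts:

    // Shape of the split commit replicated above (values from the logged op).
    var conn = new Mongo("mongovm16:20011");  // assumed config primary in this log
    conn.getDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp(1, 11),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll",
                   min: { _id: -96.0 }, max: { _id: -95.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-96.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp(1, 12),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll",
                   min: { _id: -95.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-95.0" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
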
[js_test:multi_coll_drop] 2016-04-06T02:52:38.070-0500 c20013| 2016-04-06T02:52:08.708-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:38.072-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.073-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.075-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.077-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.079-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.081-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.086-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.093-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.093-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.094-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.096-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.097-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.100-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.101-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.103-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.104-0500 c20013| 2016-04-06T02:52:08.708-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:38.105-0500 c20013| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.107-0500 c20013| 2016-04-06T02:52:08.709-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll-_id_-96.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.108-0500 c20013| 2016-04-06T02:52:08.709-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll-_id_-95.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.118-0500 c20013| 2016-04-06T02:52:08.709-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 556 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.709-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.122-0500 c20013| 2016-04-06T02:52:08.715-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 556 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.124-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.135-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.135-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.139-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.141-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.147-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.147-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.155-0500 c20013| 2016-04-06T02:52:08.716-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 556 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|35, t: 1, h: 2198379315137148602, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.713-0500-5704c02865c17830b843f187", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128713), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -96.0 }, max: { _id: MaxKey } }, left: { min: { _id: -96.0 }, max: { _id: -95.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -95.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.156-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.157-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.158-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.159-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.160-0500 c20013| 2016-04-06T02:52:08.716-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|34, t: 1 }
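
Alongside the chunk updates, the shard journals the action in config.changelog — the op: "i" insert in Request 556's batch, whose details subdocument preserves the before/left/right ranges of the split. Those audit entries can be pulled back out with an ordinary query (host assumed from this log):

    // Read back the audit trail written by the split replicated above.
    var conn = new Mongo("mongovm16:20011");  // assumed config primary in this log
    conn.getDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 }).limit(5)
        .forEach(printjson);
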
[js_test:multi_coll_drop] 2016-04-06T02:52:38.162-0500 c20013| 2016-04-06T02:52:08.716-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|35 and ending at ts: Timestamp 1459929128000|35
[js_test:multi_coll_drop] 2016-04-06T02:52:38.162-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.162-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.163-0500 c20013| 2016-04-06T02:52:08.716-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.163-0500 c20013| 2016-04-06T02:52:08.717-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.168-0500 c20013| 2016-04-06T02:52:08.717-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.187-0500 c20013| 2016-04-06T02:52:08.717-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:38.192-0500 c20013| 2016-04-06T02:52:08.717-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.205-0500 c20013| 2016-04-06T02:52:08.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 558 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.211-0500 c20013| 2016-04-06T02:52:08.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 558 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.216-0500 c20013| 2016-04-06T02:52:08.717-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:38.219-0500 c20013| 2016-04-06T02:52:08.717-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.221-0500 c20013| 2016-04-06T02:52:08.717-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.221-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.231-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.234-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.244-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.254-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.256-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.269-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.269-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.273-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.286-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.293-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.294-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.294-0500 c20013| 2016-04-06T02:52:08.718-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:38.300-0500 c20013| 2016-04-06T02:52:08.718-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 558 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.302-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.303-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.305-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.306-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
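
The recurring Reporter lines are the sync-source feedback loop: after each applied batch, the secondary pushes a replSetUpdatePosition command upstream listing, per member, the optimes it believes are applied and durable, which is what lets the primary advance the majority commit point tracked by the _lastCommittedOpTime lines. replSetUpdatePosition itself is internal, but the same per-member optimes are visible through replSetGetStatus (host assumed from this log):

    // Read the optimes the Reporter forwards above, via replSetGetStatus.
    var conn = new Mongo("mongovm16:20011");  // assumed config primary in this log
    var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        print(m.name + " state=" + m.stateStr + " applied=" + tojson(m.optime));
    });
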
[js_test:multi_coll_drop] 2016-04-06T02:52:38.306-0500 c20013| 2016-04-06T02:52:08.718-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.310-0500 c20013| 2016-04-06T02:52:08.719-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 560 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.719-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.311-0500 c20013| 2016-04-06T02:52:08.719-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 560 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.312-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.314-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.315-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.315-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.320-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.334-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.336-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.339-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.341-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.344-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.352-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.356-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.356-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.363-0500 c20013| 2016-04-06T02:52:08.719-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 560 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|36, t: 1, h: 3351989292470422809, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.375-0500 c20013| 2016-04-06T02:52:08.719-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|35, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.387-0500 c20013| 2016-04-06T02:52:08.719-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|36 and ending at ts: Timestamp 1459929128000|36
[js_test:multi_coll_drop] 2016-04-06T02:52:38.389-0500 c20013| 2016-04-06T02:52:08.719-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:38.390-0500 c20013| 2016-04-06T02:52:08.719-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:38.391-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.393-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.393-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.394-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.395-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.396-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.396-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.396-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.397-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.399-0500 c20013| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.401-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.405-0500 c20013| 2016-04-06T02:52:08.720-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.406-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.407-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.410-0500 c20013| 2016-04-06T02:52:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 562 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.413-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.413-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.413-0500 c20013| 2016-04-06T02:52:08.720-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:38.415-0500 c20013| 2016-04-06T02:52:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 562 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.417-0500 c20013| 2016-04-06T02:52:08.720-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.417-0500 c20013| 2016-04-06T02:52:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 562 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.423-0500 c20013| 2016-04-06T02:52:08.720-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.426-0500 c20013| 2016-04-06T02:52:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 563 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.430-0500 c20013| 2016-04-06T02:52:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 563 on host mongovm16:20011
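
The "Using idhack" lines mark the _id fast path: oplog updates such as the config.locks release apply with an exact _id equality, so the apply thread skips plan enumeration entirely (contrast the scored candidate plans for the config.chunks query earlier). The same shortcut shows up in explain output (host assumed from this log):

    // An exact-_id match bypasses the multi-planner, as the idhack lines show.
    var conn = new Mongo("mongovm16:20011");  // assumed config primary in this log
    var exp = conn.getDB("config").locks.find({ _id: "multidrop.coll" }).explain();
    printjson(exp.queryPlanner.winningPlan);  // expected stage: "IDHACK" on this server line
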
[js_test:multi_coll_drop] 2016-04-06T02:52:38.433-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.437-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.440-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.442-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.446-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.449-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.449-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.451-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.454-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.455-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.456-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.461-0500 c20013| 2016-04-06T02:52:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 563 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.463-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.464-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.466-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.467-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.468-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.468-0500 c20013| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:38.469-0500 c20013| 2016-04-06T02:52:08.721-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:38.474-0500 c20013| 2016-04-06T02:52:08.721-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 566 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.721-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.476-0500 c20013| 2016-04-06T02:52:08.721-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 566 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.481-0500 c20013| 2016-04-06T02:52:08.721-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.485-0500 c20013| 2016-04-06T02:52:08.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 567 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.487-0500 c20013| 2016-04-06T02:52:08.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 567 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:38.492-0500 c20013| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 566 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.494-0500 c20013| 2016-04-06T02:52:08.722-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|36, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.495-0500 c20013| 2016-04-06T02:52:08.722-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:38.497-0500 c20013| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 567 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:38.509-0500 s20015| 2016-04-06T02:52:19.442-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:38.513-0500 c20011| 2016-04-06T02:52:08.688-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:38.516-0500 c20011| 2016-04-06T02:52:08.688-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|32, t: 1 } and is durable through: { ts:
Timestamp 1459929128000|31, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.517-0500 c20011| 2016-04-06T02:52:08.688-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.524-0500 c20011| 2016-04-06T02:52:08.688-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:38.529-0500 c20011| 2016-04-06T02:52:08.689-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|32, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|31, t: 1 }, name-id: "111" } [js_test:multi_coll_drop] 2016-04-06T02:52:38.532-0500 c20011| 2016-04-06T02:52:08.689-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:38.534-0500 d20010| 2016-04-06T02:52:21.641-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -81.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c03365c17830b843f1a5 [js_test:multi_coll_drop] 2016-04-06T02:52:38.537-0500 d20010| 2016-04-06T02:52:21.641-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|40||5704c02806c33406d4d9c0c0, current metadata version is 1|40||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.538-0500 d20010| 2016-04-06T02:52:21.645-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:38.539-0500 d20010| 2016-04-06T02:52:22.562-0500 I ASIO [conn5] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.539-0500 d20010| 2016-04-06T02:52:22.562-0500 I ASIO [conn5] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:52:38.541-0500 d20010| 2016-04-06T02:52:22.563-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.542-0500 d20010| 2016-04-06T02:52:22.563-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|40||5704c02806c33406d4d9c0c0, took 922ms) [js_test:multi_coll_drop] 2016-04-06T02:52:38.543-0500 d20010| 2016-04-06T02:52:22.563-0500 I SHARDING [conn5] splitChunk accepted at version 1|40||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.561-0500 d20010| 2016-04-06T02:52:22.591-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:22.591-0500-5704c03665c17830b843f1a6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142591), what: 
"split", ns: "multidrop.coll", details: { before: { min: { _id: -81.0 }, max: { _id: MaxKey } }, left: { min: { _id: -81.0 }, max: { _id: -80.0 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -80.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:38.561-0500 d20010| 2016-04-06T02:52:22.653-0500 I SHARDING [conn5] distributed lock with ts: 5704c03365c17830b843f1a5' unlocked. [js_test:multi_coll_drop] 2016-04-06T02:52:38.566-0500 d20010| 2016-04-06T02:52:22.653-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -81.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -80.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|40, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 8609ms [js_test:multi_coll_drop] 2016-04-06T02:52:38.570-0500 d20010| 2016-04-06T02:52:22.656-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -80.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -79.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|42, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:38.580-0500 d20010| 2016-04-06T02:52:22.664-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -80.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c03665c17830b843f1a7 [js_test:multi_coll_drop] 2016-04-06T02:52:38.584-0500 d20010| 2016-04-06T02:52:22.664-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|42||5704c02806c33406d4d9c0c0, current metadata version is 1|42||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.592-0500 d20010| 2016-04-06T02:52:22.666-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|42||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:38.594-0500 d20010| 2016-04-06T02:52:22.666-0500 I SHARDING [conn5] splitChunk accepted at version 1|42||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.598-0500 d20010| 2016-04-06T02:52:22.676-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:22.676-0500-5704c03665c17830b843f1a8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142676), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -80.0 }, max: { _id: MaxKey } }, left: { min: { _id: -80.0 }, max: { _id: -79.0 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -79.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:38.599-0500 d20010| 2016-04-06T02:52:22.699-0500 I SHARDING [conn5] distributed lock with ts: 5704c03665c17830b843f1a7' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:52:38.600-0500 d20010| 2016-04-06T02:52:22.702-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -79.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -78.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|44, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:38.603-0500 d20010| 2016-04-06T02:52:22.709-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -79.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c03665c17830b843f1a9 [js_test:multi_coll_drop] 2016-04-06T02:52:38.608-0500 d20010| 2016-04-06T02:52:22.709-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|44||5704c02806c33406d4d9c0c0, current metadata version is 1|44||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.612-0500 d20010| 2016-04-06T02:52:22.710-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|44||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:38.613-0500 d20010| 2016-04-06T02:52:22.710-0500 I SHARDING [conn5] splitChunk accepted at version 1|44||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.619-0500 d20010| 2016-04-06T02:52:22.727-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:22.727-0500-5704c03665c17830b843f1aa", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142727), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -79.0 }, max: { _id: MaxKey } }, left: { min: { _id: -79.0 }, max: { _id: -78.0 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -78.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:38.621-0500 d20010| 2016-04-06T02:52:22.747-0500 I SHARDING [conn5] distributed lock with ts: 5704c03665c17830b843f1a9' unlocked. 
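The distributed lock that brackets each of these rounds is a document in config.locks, acquired and released with the findAndModify commands visible further down in this log: acquisition matches { _id: "multidrop.coll", state: 0 } and sets state: 2 with who/process/when/why (upsert: true, new: true, writeConcern w: "majority"); release sets state: 0. A read-only shell sketch, assuming a connection to the config replica set:

    // Inspect the lock document (state 2 = held, state 0 = free;
    // ts rotates on every acquisition, as in the log lines above).
    var conf = db.getSiblingDB("config");
    printjson(conf.locks.findOne({ _id: "multidrop.coll" }));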
[js_test:multi_coll_drop] 2016-04-06T02:52:38.622-0500 s20014| 2016-04-06T02:52:17.202-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.622-0500 s20014| 2016-04-06T02:52:17.702-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.623-0500 s20014| 2016-04-06T02:52:17.703-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.624-0500 s20014| 2016-04-06T02:52:18.203-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.624-0500 s20014| 2016-04-06T02:52:18.204-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.625-0500 s20014| 2016-04-06T02:52:18.705-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.625-0500 s20014| 2016-04-06T02:52:18.708-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.627-0500 s20014| 2016-04-06T02:52:19.208-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.628-0500 s20014| 2016-04-06T02:52:19.209-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.636-0500 s20014| 2016-04-06T02:52:19.709-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:38.639-0500 s20014| 2016-04-06T02:52:19.710-0500 D ASIO [Balancer] startCommand: RemoteCommand 255 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:49.710-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929137199), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.643-0500 s20014| 2016-04-06T02:52:19.710-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 255 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:38.646-0500 s20014| 2016-04-06T02:52:21.641-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 255 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929139000|4, t: 2 }, electionId: ObjectId('7fffffff0000000000000002') } [js_test:multi_coll_drop] 2016-04-06T02:52:38.650-0500 s20014| 2016-04-06T02:52:21.641-0500 D ASIO [Balancer] startCommand: RemoteCommand 257 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:51.641-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.653-0500 s20014| 2016-04-06T02:52:21.641-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 257 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.654-0500 s20014| 2016-04-06T02:52:21.645-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 257 finished with response: { waitedMS: 3, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.655-0500 
s20014| 2016-04-06T02:52:21.645-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929139000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.661-0500 s20014| 2016-04-06T02:52:21.645-0500 D ASIO [Balancer] startCommand: RemoteCommand 259 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:51.645-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.662-0500 s20014| 2016-04-06T02:52:21.645-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 259 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.666-0500 s20014| 2016-04-06T02:52:21.645-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 259 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.667-0500 s20014| 2016-04-06T02:52:21.645-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:52:38.669-0500 s20014| 2016-04-06T02:52:21.645-0500 D ASIO [Balancer] startCommand: RemoteCommand 261 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:51.645-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.670-0500 s20014| 2016-04-06T02:52:21.645-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 261 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.671-0500 s20014| 2016-04-06T02:52:21.645-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 261 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.672-0500 s20014| 2016-04-06T02:52:21.645-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:52:38.675-0500 s20014| 2016-04-06T02:52:21.645-0500 D ASIO [Balancer] startCommand: RemoteCommand 263 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:51.645-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929141645), up: 14, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.676-0500 s20014| 2016-04-06T02:52:21.646-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 263 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:38.680-0500 s20014| 2016-04-06T02:52:21.652-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 263 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929141000|1, t: 2 }, electionId: ObjectId('7fffffff0000000000000002') } [js_test:multi_coll_drop] 2016-04-06T02:52:38.682-0500 s20014| 2016-04-06T02:52:22.653-0500 D ASIO [conn1] startCommand: RemoteCommand 265 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:52.653-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: 
"majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.927-0500 s20014| 2016-04-06T02:52:22.654-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 265 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:38.930-0500 s20014| 2016-04-06T02:52:22.654-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 265 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.934-0500 s20014| 2016-04-06T02:52:22.654-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|40||5704c02806c33406d4d9c0c0 and 21 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:38.936-0500 s20014| 2016-04-06T02:52:22.654-0500 D SHARDING [conn1] major version query from 1|40||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|40 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.938-0500 s20014| 2016-04-06T02:52:22.654-0500 D ASIO [conn1] startCommand: RemoteCommand 267 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:52.654-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|40 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.939-0500 s20014| 2016-04-06T02:52:22.654-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 267 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.941-0500 s20014| 2016-04-06T02:52:22.655-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 267 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: -80.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.942-0500 s20014| 2016-04-06T02:52:22.655-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|42||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.946-0500 s20014| 2016-04-06T02:52:22.655-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 24 version: 1|42||5704c02806c33406d4d9c0c0 based on: 1|40||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.948-0500 s20014| 2016-04-06T02:52:22.655-0500 D ASIO [conn1] startCommand: RemoteCommand 269 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:52.655-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.949-0500 s20014| 2016-04-06T02:52:22.656-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 269 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.954-0500 s20014| 2016-04-06T02:52:22.656-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 269 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.959-0500 s20014| 2016-04-06T02:52:22.656-0500 I COMMAND [conn1] splitting chunk [{ _id: -80.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:38.961-0500 s20014| 2016-04-06T02:52:22.656-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20010, no events [js_test:multi_coll_drop] 2016-04-06T02:52:38.964-0500 s20014| 2016-04-06T02:52:22.699-0500 D ASIO [conn1] startCommand: RemoteCommand 271 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:52.699-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.965-0500 s20014| 2016-04-06T02:52:22.700-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 271 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.969-0500 s20014| 2016-04-06T02:52:22.701-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 271 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.970-0500 s20014| 2016-04-06T02:52:22.701-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|42||5704c02806c33406d4d9c0c0 and 22 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:38.977-0500 s20014| 2016-04-06T02:52:22.701-0500 D SHARDING [conn1] major version query from 1|42||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|42 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.987-0500 s20014| 2016-04-06T02:52:22.701-0500 D ASIO [conn1] startCommand: RemoteCommand 273 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:52.701-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|42 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.987-0500 s20014| 2016-04-06T02:52:22.701-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 273 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:38.995-0500 s20014| 2016-04-06T02:52:22.701-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 273 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: -79.0 }, 
shard: "shard0000" }, { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:38.998-0500 s20014| 2016-04-06T02:52:22.701-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|44||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:38.999-0500 s20014| 2016-04-06T02:52:22.701-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 25 version: 1|44||5704c02806c33406d4d9c0c0 based on: 1|42||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:39.008-0500 s20014| 2016-04-06T02:52:22.702-0500 D ASIO [conn1] startCommand: RemoteCommand 275 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:52.702-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.009-0500 s20014| 2016-04-06T02:52:22.702-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 275 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:39.013-0500 s20014| 2016-04-06T02:52:22.702-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 275 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.013-0500 s20014| 2016-04-06T02:52:22.702-0500 I COMMAND [conn1] splitting chunk [{ _id: -79.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:39.018-0500 s20014| 2016-04-06T02:52:22.747-0500 D ASIO [conn1] startCommand: RemoteCommand 277 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:52.747-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.019-0500 s20014| 2016-04-06T02:52:22.747-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 277 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:39.019-0500 s20014| 2016-04-06T02:52:23.719-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:39.024-0500 s20014| 2016-04-06T02:52:23.719-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20013, no events [js_test:multi_coll_drop] 2016-04-06T02:52:39.034-0500 c20012| 2016-04-06T02:52:08.446-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 351 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:39.036-0500 c20012| 2016-04-06T02:52:08.446-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 351 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:39.037-0500 c20012| 2016-04-06T02:52:08.446-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 351 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.038-0500 c20012| 2016-04-06T02:52:08.446-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 348 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.043-0500 c20012| 2016-04-06T02:52:08.446-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.044-0500 c20012| 2016-04-06T02:52:08.446-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:39.050-0500 c20012| 2016-04-06T02:52:08.447-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 354 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.447-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:39.055-0500 c20012| 2016-04-06T02:52:08.447-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 354 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:39.059-0500 c20012| 2016-04-06T02:52:08.447-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.067-0500 c20012| 2016-04-06T02:52:08.447-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|8, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:39.070-0500 c20012| 2016-04-06T02:52:08.447-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.074-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: 'ns_1_min_1' io: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:39.080-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: 'ns_1_shard_1_min_1' io: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:39.082-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: 'ns_1_lastmod_1' io: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:39.084-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Relevant index 0 is kp: { lastmod: 1 } multikey name: 'doesnt_matter' [js_test:multi_coll_drop] 2016-04-06T02:52:39.084-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Relevant index 0 is kp: { lastmod: 1 } multikey name: 'doesnt_matter' [js_test:multi_coll_drop] 2016-04-06T02:52:39.085-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Scoring query plan: IXSCAN { ns: 1, lastmod: 1 } planHitEOF=1 [js_test:multi_coll_drop] 2016-04-06T02:52:39.088-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:39.090-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Scoring query plan: IXSCAN { ns: 1, shard: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:39.094-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:39.094-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Scoring query plan: IXSCAN { ns: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:39.096-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:39.097-0500 c20012| 2016-04-06T02:52:08.447-0500 D QUERY [conn7] Winning plan: IXSCAN { ns: 1, lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.102-0500 c20012| 2016-04-06T02:52:08.447-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 fromMultiPlanner:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:530 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms 
[js_test:multi_coll_drop] 2016-04-06T02:52:39.107-0500 c20012| 2016-04-06T02:52:08.463-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 354 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|9, t: 1, h: -9131470462815342067, v: 2, op: "c", ns: "config.$cmd", o: { create: "collections" } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.115-0500 c20012| 2016-04-06T02:52:08.463-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|9 and ending at ts: Timestamp 1459929128000|9 [js_test:multi_coll_drop] 2016-04-06T02:52:39.120-0500 c20012| 2016-04-06T02:52:08.464-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:39.122-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.123-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.125-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.126-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.128-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.128-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.136-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.137-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.140-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.143-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.144-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.149-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.149-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.152-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.153-0500 c20012| 2016-04-06T02:52:08.465-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:39.154-0500 c20011| 2016-04-06T02:52:08.689-0500 D COMMAND [conn14] run 
command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:39.160-0500 c20011| 2016-04-06T02:52:08.689-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:39.168-0500 c20013| 2016-04-06T02:52:08.722-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:39.171-0500 c20013| 2016-04-06T02:52:08.722-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 569 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.722-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:39.179-0500 c20013| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 570 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:39.179-0500 c20013| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 569 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:39.201-0500 c20013| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 570 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:39.202-0500 c20013| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 570 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.219-0500 c20013| 2016-04-06T02:52:08.726-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 569 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|37, t: 1, h: 8332631665531795890, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f188'), state: 2, when: new Date(1459929128725), why: "splitting chunk [{ _id: -95.0 }, { _id: MaxKey 
}) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.231-0500 c20013| 2016-04-06T02:52:08.726-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|37 and ending at ts: Timestamp 1459929128000|37 [js_test:multi_coll_drop] 2016-04-06T02:52:39.231-0500 c20013| 2016-04-06T02:52:08.726-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:39.231-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.232-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.232-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.232-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.232-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.232-0500 c20011| 2016-04-06T02:52:08.689-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:39.238-0500 c20011| 2016-04-06T02:52:08.689-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.240-0500 c20011| 2016-04-06T02:52:08.689-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|32, t: 1 } and is durable through: { ts: Timestamp 1459929128000|31, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.246-0500 c20011| 2016-04-06T02:52:08.689-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|32, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|31, t: 1 }, name-id: "111" } [js_test:multi_coll_drop] 2016-04-06T02:52:39.251-0500 c20011| 2016-04-06T02:52:08.689-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:39.257-0500 c20011| 2016-04-06T02:52:08.690-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:39.259-0500 c20011| 2016-04-06T02:52:08.690-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:39.267-0500 c20011| 2016-04-06T02:52:08.690-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|32, t: 1 } and is durable through: { ts: Timestamp 1459929128000|32, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.268-0500 c20011| 2016-04-06T02:52:08.690-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|32, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.272-0500 c20011| 2016-04-06T02:52:08.690-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.278-0500 c20011| 2016-04-06T02:52:08.690-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:39.282-0500 c20011| 2016-04-06T02:52:08.690-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:39.282-0500 c20011| 2016-04-06T02:52:08.690-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:39.286-0500 c20011| 2016-04-06T02:52:08.690-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:39.288-0500 c20011| 2016-04-06T02:52:08.690-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:39.290-0500 c20011| 2016-04-06T02:52:08.690-0500 D REPL [conn15] received notification that node with memberID 2 in 
config with version 1 has reached optime: { ts: Timestamp 1459929128000|32, t: 1 } and is durable through: { ts: Timestamp 1459929128000|32, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.300-0500 c20011| 2016-04-06T02:52:08.690-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:39.310-0500 c20011| 2016-04-06T02:52:08.690-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:39.314-0500 c20011| 2016-04-06T02:52:08.690-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f184') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms
[js_test:multi_coll_drop] 2016-04-06T02:52:39.320-0500 c20011| 2016-04-06T02:52:08.691-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.321-0500 c20011| 2016-04-06T02:52:08.691-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.324-0500 c20011| 2016-04-06T02:52:08.691-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|8 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.325-0500 c20011| 2016-04-06T02:52:08.691-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.328-0500 c20011| 2016-04-06T02:52:08.691-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|8 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.330-0500 c20011| 2016-04-06T02:52:08.692-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:39.331-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.339-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.342-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.344-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.346-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.347-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.348-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.349-0500 s20015| 2016-04-06T02:52:19.942-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:52:39.350-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.362-0500 c20011| 2016-04-06T02:52:08.692-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|8 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:39.377-0500 c20011| 2016-04-06T02:52:08.693-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f186'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128693), why: "splitting chunk [{ _id: -96.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.381-0500 s20015| 2016-04-06T02:52:19.943-0500 D ASIO [Balancer] startCommand: RemoteCommand 51 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:49.943-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929137435), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.386-0500 s20015| 2016-04-06T02:52:19.944-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 51 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:52:39.393-0500 c20012| 2016-04-06T02:52:08.465-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 356 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.465-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.394-0500 c20012| 2016-04-06T02:52:08.465-0500 D STORAGE [repl writer worker 15] create collection config.collections {}
[js_test:multi_coll_drop] 2016-04-06T02:52:39.395-0500 c20012| 2016-04-06T02:52:08.465-0500 D STORAGE [repl writer worker 15] stored meta data for config.collections @ RecordId(17)
[js_test:multi_coll_drop] 2016-04-06T02:52:39.396-0500 c20012| 2016-04-06T02:52:08.465-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 356 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.399-0500 c20012| 2016-04-06T02:52:08.465-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-39-6577373056560964212 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
[js_test:multi_coll_drop] 2016-04-06T02:52:39.401-0500 c20011| 2016-04-06T02:52:08.693-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.401-0500 c20013| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.404-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.405-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.406-0500 c20013| 2016-04-06T02:52:08.727-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:39.408-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.409-0500 c20013| 2016-04-06T02:52:08.727-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.411-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.411-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.413-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.414-0500 c20011| 2016-04-06T02:52:08.693-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.420-0500 c20011| 2016-04-06T02:52:08.693-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.423-0500 c20011| 2016-04-06T02:52:08.694-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:52:39.425-0500 c20011| 2016-04-06T02:52:08.694-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:52:39.427-0500 c20011| 2016-04-06T02:52:08.696-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.428-0500 c20011| 2016-04-06T02:52:08.696-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:39.430-0500 c20011| 2016-04-06T02:52:08.696-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|33, t: 1 } and is durable through: { ts: Timestamp 1459929128000|32, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.432-0500 c20011| 2016-04-06T02:52:08.696-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.438-0500 c20011| 2016-04-06T02:52:08.696-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:39.440-0500 c20011| 2016-04-06T02:52:08.696-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.444-0500 c20011| 2016-04-06T02:52:08.697-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.444-0500 c20011| 2016-04-06T02:52:08.697-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:39.447-0500 c20011| 2016-04-06T02:52:08.697-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.449-0500 s20015| 2016-04-06T02:52:21.641-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 51 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929139000|5, t: 2 }, electionId: ObjectId('7fffffff0000000000000002') }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.450-0500 s20015| 2016-04-06T02:52:21.642-0500 D ASIO [Balancer] startCommand: RemoteCommand 53 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:51.642-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.452-0500 s20015| 2016-04-06T02:52:21.642-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:39.453-0500 s20015| 2016-04-06T02:52:21.642-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 54 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:39.455-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.457-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.458-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.459-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.459-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.460-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.462-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.462-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.462-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.464-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.465-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.466-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.467-0500 c20013| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.468-0500 c20013| 2016-04-06T02:52:08.728-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:39.477-0500 c20013| 2016-04-06T02:52:08.728-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.481-0500 c20013| 2016-04-06T02:52:08.728-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 574 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.483-0500 c20011| 2016-04-06T02:52:08.697-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|33, t: 1 } and is durable through: { ts: Timestamp 1459929128000|32, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.488-0500 c20011| 2016-04-06T02:52:08.697-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:39.489-0500 c20012| 2016-04-06T02:52:08.465-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.492-0500 c20012| 2016-04-06T02:52:08.466-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 356 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|10, t: 1, h: 7600279498637035863, v: 2, op: "i", ns: "config.collections", o: { _id: "multidrop.coll", lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1.0 }, unique: false } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.493-0500 c20012| 2016-04-06T02:52:08.466-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|10 and ending at ts: Timestamp 1459929128000|10
[js_test:multi_coll_drop] 2016-04-06T02:52:39.496-0500 c20012| 2016-04-06T02:52:08.468-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 358 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.468-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|8, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.500-0500 c20012| 2016-04-06T02:52:08.468-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 358 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.501-0500 c20012| 2016-04-06T02:52:08.470-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-39-6577373056560964212 ok range 1 -> 1 current: 1
[js_test:multi_coll_drop] 2016-04-06T02:52:39.503-0500 c20012| 2016-04-06T02:52:08.470-0500 D STORAGE [repl writer worker 15] config.collections: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:39.509-0500 c20012| 2016-04-06T02:52:08.470-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createSortedDataInterface ident: index-40-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.collections" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:39.512-0500 c20012| 2016-04-06T02:52:08.470-0500 D STORAGE [repl writer worker 15] create uri: table:index-40-6577373056560964212 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.collections" }),
[js_test:multi_coll_drop] 2016-04-06T02:52:39.514-0500 c20012| 2016-04-06T02:52:08.476-0500 D STORAGE [repl writer worker 15] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-40-6577373056560964212 ok range 6 -> 6 current: 6
[js_test:multi_coll_drop] 2016-04-06T02:52:39.517-0500 c20012| 2016-04-06T02:52:08.476-0500 D STORAGE [repl writer worker 15] config.collections: clearing plan cache - collection info cache reset
[js_test:multi_coll_drop] 2016-04-06T02:52:39.520-0500 c20012| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl
writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.521-0500 c20012| 2016-04-06T02:52:08.476-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.522-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.526-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.528-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.528-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.529-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.531-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.532-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.533-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.534-0500 c20012| 2016-04-06T02:52:08.477-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 358 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.535-0500 s20015| 2016-04-06T02:52:21.643-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:39.536-0500 s20015| 2016-04-06T02:52:21.643-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 54 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:39.537-0500 s20015| 2016-04-06T02:52:21.643-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 53 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:39.538-0500 c20012| 2016-04-06T02:52:08.477-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|9, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.542-0500 s20015| 2016-04-06T02:52:22.561-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 53 finished with response: { waitedMS: 917, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.546-0500 s20015| 2016-04-06T02:52:22.562-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929141000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.555-0500 2016-04-06T02:52:23.688-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 20 secs, remote host 192.168.100.28:20011)
[js_test:multi_coll_drop] 2016-04-06T02:52:39.562-0500 s20015| 2016-04-06T02:52:22.562-0500 D ASIO [Balancer] startCommand: RemoteCommand 56 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:52.562-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.564-0500 s20015| 2016-04-06T02:52:22.562-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 56 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:52:39.568-0500 s20015| 2016-04-06T02:52:22.562-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 56 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.570-0500 s20015| 2016-04-06T02:52:22.563-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB
[js_test:multi_coll_drop] 2016-04-06T02:52:39.572-0500 s20015| 2016-04-06T02:52:22.563-0500 D ASIO [Balancer] startCommand: RemoteCommand 58 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:52.563-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.574-0500 s20015| 2016-04-06T02:52:22.563-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 58 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.575-0500 s20015| 2016-04-06T02:52:22.563-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 58 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.576-0500 s20015| 2016-04-06T02:52:22.564-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled
[js_test:multi_coll_drop] 2016-04-06T02:52:39.579-0500 s20015| 2016-04-06T02:52:22.564-0500 D ASIO [Balancer] startCommand: RemoteCommand 60 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:52.564-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929142564), up: 15, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.580-0500 s20015| 2016-04-06T02:52:22.564-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 60 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:52:39.589-0500 s20015| 2016-04-06T02:52:22.631-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 60 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929142000|2, t: 2 }, electionId: ObjectId('7fffffff0000000000000002') }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.591-0500 c20013| 2016-04-06T02:52:08.728-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 574 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.598-0500 c20013| 2016-04-06T02:52:08.728-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 574 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.620-0500 c20013| 2016-04-06T02:52:08.728-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 576 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.728-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.626-0500 c20013| 2016-04-06T02:52:08.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 576 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.637-0500 c20013| 2016-04-06T02:52:08.729-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.662-0500 c20013| 2016-04-06T02:52:08.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 577 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.663-0500 c20013| 2016-04-06T02:52:08.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 577 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.663-0500 c20013| 2016-04-06T02:52:08.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 577 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.672-0500 c20013| 2016-04-06T02:52:08.730-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 576 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.677-0500 c20013| 2016-04-06T02:52:08.730-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|37, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.677-0500 c20013| 2016-04-06T02:52:08.730-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:39.685-0500 c20013| 2016-04-06T02:52:08.730-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 580 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.730-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.686-0500 c20013| 2016-04-06T02:52:08.730-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 580 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.694-0500 c20013| 2016-04-06T02:52:08.732-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 580 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|38, t: 1, h: 1151462575445385727, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: -94.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-95.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-94.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.697-0500 c20013| 2016-04-06T02:52:08.732-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|38 and ending at ts: Timestamp 1459929128000|38
[js_test:multi_coll_drop] 2016-04-06T02:52:39.699-0500 c20013| 2016-04-06T02:52:08.732-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:39.699-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.700-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.700-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.701-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.721-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.735-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.742-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.780-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.781-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.782-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.783-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.784-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.785-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.785-0500 c20013| 2016-04-06T02:52:08.732-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:39.786-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.787-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.787-0500 c20013| 2016-04-06T02:52:08.732-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-95.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.792-0500 c20013| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.793-0500 c20013| 2016-04-06T02:52:08.733-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-94.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.795-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.796-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.796-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.797-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.797-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.798-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.798-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.799-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.800-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.800-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.803-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.804-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.808-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.809-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.813-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.814-0500 c20013| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:39.815-0500 c20013| 2016-04-06T02:52:08.733-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:39.819-0500 c20013| 2016-04-06T02:52:08.733-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.838-0500 c20013| 2016-04-06T02:52:08.733-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 582 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.839-0500 c20013| 2016-04-06T02:52:08.733-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 582 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.839-0500 c20013| 2016-04-06T02:52:08.733-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 582 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.840-0500 c20013| 2016-04-06T02:52:08.734-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 584 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.734-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.840-0500 c20013| 2016-04-06T02:52:08.734-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 584 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.842-0500 c20013| 2016-04-06T02:52:08.734-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.847-0500 c20013| 2016-04-06T02:52:08.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 585 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.856-0500 c20013| 2016-04-06T02:52:08.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 585 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.857-0500 c20013| 2016-04-06T02:52:08.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 585 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.862-0500 c20013| 2016-04-06T02:52:08.735-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 584 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.862-0500 c20013| 2016-04-06T02:52:08.735-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|38, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.863-0500 c20013| 2016-04-06T02:52:08.735-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:39.870-0500 c20013| 2016-04-06T02:52:08.735-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 588 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.735-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.894-0500 c20013| 2016-04-06T02:52:08.735-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 588 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:39.902-0500 c20013| 2016-04-06T02:52:08.737-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 588 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|39, t: 1, h: -144793915507581801, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.735-0500-5704c02865c17830b843f189", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128735), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -95.0 }, max: { _id: MaxKey } }, left: { min: { _id: -95.0 }, max: { _id: -94.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -94.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:39.908-0500 c20013| 2016-04-06T02:52:08.738-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|39 and ending at ts: Timestamp 1459929128000|39
[js_test:multi_coll_drop] 2016-04-06T02:52:39.917-0500 c20013| 2016-04-06T02:52:08.738-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:39.920-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.923-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.932-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.932-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.938-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.943-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.946-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.949-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.953-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.959-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.960-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.961-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.964-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.965-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.966-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.967-0500 c20013| 2016-04-06T02:52:08.738-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:39.968-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.970-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.971-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.979-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:39.983-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.983-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.984-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.985-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.986-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.988-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:39.994-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.020-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.031-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.032-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.032-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.034-0500 c20013| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.035-0500 c20013| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.035-0500 c20013| 2016-04-06T02:52:08.739-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:40.039-0500 c20013| 2016-04-06T02:52:08.739-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.044-0500 c20013| 2016-04-06T02:52:08.739-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 590 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.045-0500 c20013| 2016-04-06T02:52:08.739-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 590 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.048-0500 c20013| 2016-04-06T02:52:08.739-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 590 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.049-0500 c20013| 2016-04-06T02:52:08.740-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 592 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.740-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.050-0500 c20013| 2016-04-06T02:52:08.740-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 592 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.055-0500 c20013| 2016-04-06T02:52:08.751-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.059-0500 c20013| 2016-04-06T02:52:08.751-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 593 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.060-0500 c20013| 2016-04-06T02:52:08.751-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 593 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.061-0500 c20013| 2016-04-06T02:52:08.751-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 593 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.062-0500 c20013| 2016-04-06T02:52:08.751-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 592 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.063-0500 c20013| 2016-04-06T02:52:08.752-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|39, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.063-0500 c20013| 2016-04-06T02:52:08.752-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:40.067-0500 c20013| 2016-04-06T02:52:08.752-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 596 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.752-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.071-0500 c20013| 2016-04-06T02:52:08.752-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 596 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.074-0500 c20013| 2016-04-06T02:52:08.752-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 596 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|40, t: 1, h: -5970909802005772631, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.080-0500 c20013| 2016-04-06T02:52:08.752-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|40 and ending at ts: Timestamp 1459929128000|40 [js_test:multi_coll_drop] 2016-04-06T02:52:40.085-0500 c20013| 2016-04-06T02:52:08.754-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:40.087-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.092-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.100-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.101-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.112-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.119-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.120-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.122-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.122-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.127-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.130-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.131-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.132-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.134-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.137-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.139-0500 c20013| 2016-04-06T02:52:08.754-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:40.141-0500 c20013| 2016-04-06T02:52:08.754-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.143-0500 c20013| 2016-04-06T02:52:08.755-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 598 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.755-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.144-0500 c20013| 2016-04-06T02:52:08.755-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 
[js_test:multi_coll_drop] 2016-04-06T02:52:40.149-0500 c20013| 2016-04-06T02:52:08.755-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 598 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:40.152-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.152-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.157-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.158-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.159-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.159-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.163-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.164-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.165-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.166-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.167-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.167-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.169-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.172-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.174-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.175-0500 c20013| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.180-0500 c20013| 2016-04-06T02:52:08.755-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:40.185-0500 c20013| 2016-04-06T02:52:08.756-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.204-0500 c20013| 2016-04-06T02:52:08.756-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 599 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.205-0500 c20013| 2016-04-06T02:52:08.756-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 599 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.205-0500 c20013| 2016-04-06T02:52:08.756-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 599 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.207-0500 c20013| 2016-04-06T02:52:08.759-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.209-0500 c20013| 2016-04-06T02:52:08.759-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 601 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.211-0500 c20013| 2016-04-06T02:52:08.759-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 601 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.211-0500 c20013| 2016-04-06T02:52:08.759-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 601 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.212-0500 c20013| 2016-04-06T02:52:08.765-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 598 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.213-0500 c20013| 2016-04-06T02:52:08.765-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|40, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.214-0500 c20013| 2016-04-06T02:52:08.765-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:40.215-0500 c20013| 2016-04-06T02:52:08.765-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 604 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.765-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.217-0500 c20013| 2016-04-06T02:52:08.765-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 604 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.219-0500 c20013| 2016-04-06T02:52:08.767-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|12 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.221-0500 c20013| 2016-04-06T02:52:08.767-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.223-0500 c20013| 2016-04-06T02:52:08.767-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|12 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.226-0500 c20013| 2016-04-06T02:52:08.768-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:40.227-0500 c20013| 2016-04-06T02:52:08.768-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|12 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.229-0500 c20013| 2016-04-06T02:52:08.770-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 604 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|41, t: 1, h: -8586936061680186804, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f18a'), state: 2, when: new Date(1459929128769), why: "splitting chunk [{ _id: -94.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.230-0500 c20013| 2016-04-06T02:52:08.770-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|41 and ending at ts: Timestamp 1459929128000|41 [js_test:multi_coll_drop] 2016-04-06T02:52:40.231-0500 c20013| 2016-04-06T02:52:08.770-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:40.232-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.233-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.234-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.235-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.236-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.242-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.243-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.243-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.245-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.246-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.246-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.247-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.248-0500 c20013| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.248-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.249-0500 c20013| 2016-04-06T02:52:08.771-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:40.250-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.252-0500 c20013| 2016-04-06T02:52:08.771-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:40.253-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.254-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.255-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
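
Mixed into the thread-pool churn above is the pattern that makes config metadata reads causally consistent: conn10's find on config.chunks carries readConcern { level: "majority", afterOpTime: ... }, so the server logs "Waiting for 'committed' snapshot" until the majority-committed snapshot covers that optime, then answers from it (IXSCAN on { ns: 1, lastmod: 1 }). A hedged sketch of the same request from the shell, using the internal readConcern.afterOpTime form seen in the log; note the log prints BSON timestamps as milliseconds|increment, so Timestamp 1459929128000|40 corresponds to Timestamp(1459929128, 40):

    db.getSiblingDB("config").runCommand({
      find: "chunks",
      filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 12) } },  // logged as Timestamp 1000|12
      sort: { lastmod: 1 },
      readConcern: { level: "majority",
                     afterOpTime: { ts: Timestamp(1459929128, 40), t: NumberLong(1) } },
      maxTimeMS: 30000
    });
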
[js_test:multi_coll_drop] 2016-04-06T02:52:40.256-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.258-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.258-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.273-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.275-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.278-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.278-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.280-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.282-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.283-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.283-0500 c20013| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.284-0500 c20013| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.285-0500 c20013| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.286-0500 c20013| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.287-0500 c20013| 2016-04-06T02:52:08.772-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:40.289-0500 c20013| 2016-04-06T02:52:08.772-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 606 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.772-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.291-0500 c20013| 2016-04-06T02:52:08.772-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 606 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.293-0500 c20013| 2016-04-06T02:52:08.772-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.296-0500 c20013| 2016-04-06T02:52:08.772-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 607 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.306-0500 c20013| 2016-04-06T02:52:08.772-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 607 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.312-0500 c20013| 2016-04-06T02:52:08.772-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 607 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.322-0500 c20013| 2016-04-06T02:52:08.774-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.352-0500 c20013| 2016-04-06T02:52:08.774-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 609 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.355-0500 c20011| 2016-04-06T02:52:08.697-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.364-0500 c20011| 2016-04-06T02:52:08.700-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.364-0500 c20011| 2016-04-06T02:52:08.700-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:40.368-0500 c20011| 2016-04-06T02:52:08.700-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|33, t: 1 } and is durable through: { ts: Timestamp 1459929128000|33, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.370-0500 c20011| 2016-04-06T02:52:08.700-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.372-0500 c20011| 2016-04-06T02:52:08.700-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.373-0500 c20011| 2016-04-06T02:52:08.700-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.373-0500 c20011| 2016-04-06T02:52:08.700-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:40.376-0500 c20011| 2016-04-06T02:52:08.700-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 
1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.378-0500 c20011| 2016-04-06T02:52:08.700-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|33, t: 1 } and is durable through: { ts: Timestamp 1459929128000|33, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.380-0500 c20011| 2016-04-06T02:52:08.700-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|33, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.382-0500 c20011| 2016-04-06T02:52:08.700-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.384-0500 c20011| 2016-04-06T02:52:08.700-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.393-0500 c20011| 2016-04-06T02:52:08.700-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.398-0500 c20011| 2016-04-06T02:52:08.700-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.405-0500 c20011| 2016-04-06T02:52:08.700-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.409-0500 c20011| 2016-04-06T02:52:08.704-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f186'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128693), why: "splitting chunk [{ _id: -96.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f186'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128693), why: "splitting 
chunk [{ _id: -96.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.411-0500 c20011| 2016-04-06T02:52:08.704-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|33, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.412-0500 c20011| 2016-04-06T02:52:08.704-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.413-0500 c20011| 2016-04-06T02:52:08.704-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|33, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.415-0500 c20011| 2016-04-06T02:52:08.704-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:40.420-0500 c20011| 2016-04-06T02:52:08.704-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|33, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.427-0500 c20011| 2016-04-06T02:52:08.705-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: -95.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-96.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-95.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|10 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.428-0500 c20011| 2016-04-06T02:52:08.705-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:40.431-0500 c20011| 2016-04-06T02:52:08.705-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:40.433-0500 c20011| 
2016-04-06T02:52:08.705-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.434-0500 c20011| 2016-04-06T02:52:08.706-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-96.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.434-0500 c20011| 2016-04-06T02:52:08.706-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-95.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.434-0500 c20011| 2016-04-06T02:52:08.706-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.436-0500 c20011| 2016-04-06T02:52:08.706-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.438-0500 c20011| 2016-04-06T02:52:08.709-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|34, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|33, t: 1 }, name-id: "113" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.438-0500 c20011| 2016-04-06T02:52:08.710-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.442-0500 c20011| 2016-04-06T02:52:08.711-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.442-0500 c20011| 2016-04-06T02:52:08.711-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:40.446-0500 c20011| 2016-04-06T02:52:08.711-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|34, t: 1 } and is durable through: { ts: Timestamp 1459929128000|33, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.460-0500 c20011| 2016-04-06T02:52:08.711-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|34, t: 1 } is not yet part of the current 
'committed' snapshot: { optime: { ts: Timestamp 1459929128000|33, t: 1 }, name-id: "113" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.464-0500 c20011| 2016-04-06T02:52:08.711-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.469-0500 c20011| 2016-04-06T02:52:08.712-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.471-0500 c20011| 2016-04-06T02:52:08.712-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.472-0500 c20011| 2016-04-06T02:52:08.712-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:40.473-0500 c20011| 2016-04-06T02:52:08.712-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|34, t: 1 } and is durable through: { ts: Timestamp 1459929128000|34, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.474-0500 c20011| 2016-04-06T02:52:08.712-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|34, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.476-0500 c20011| 2016-04-06T02:52:08.712-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.484-0500 c20011| 2016-04-06T02:52:08.712-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.487-0500 c20011| 2016-04-06T02:52:08.713-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.493-0500 c20011| 2016-04-06T02:52:08.713-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: -95.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-96.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-95.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|10 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.499-0500 c20011| 2016-04-06T02:52:08.713-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.713-0500-5704c02865c17830b843f187", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128713), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -96.0 }, max: { _id: MaxKey } }, left: { min: { _id: -96.0 }, max: { _id: -95.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -95.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.502-0500 c20011| 2016-04-06T02:52:08.713-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.507-0500 c20011| 2016-04-06T02:52:08.713-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.508-0500 c20011| 2016-04-06T02:52:08.713-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|35, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|34, t: 1 }, name-id: "114" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.509-0500 c20013| 2016-04-06T02:52:08.774-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 609 on host mongovm16:20011 
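
This stretch is a whole split commit as the config primary (c20011) sees it: a findAndModify on config.locks takes the distributed lock with { w: "majority" }, config.collections is re-read under readConcern majority at the lock's optime, the two resulting chunk documents are written atomically with applyOps guarded by a preCondition on the highest lastmod, and a "split" document is inserted into config.changelog. The repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines are each majority write waiting for replication to catch up. The applyOps shape, condensed from the log above (values as logged; a sketch of what the server ran, not a recommended interface):

    db.getSiblingDB("config").runCommand({
      applyOps: [
        { op: "u", b: true, ns: "config.chunks",
          o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp(1, 11) /* min/max/shard as logged */ },
          o2: { _id: "multidrop.coll-_id_-96.0" } },
        { op: "u", b: true, ns: "config.chunks",
          o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp(1, 12) /* min/max/shard as logged */ },
          o2: { _id: "multidrop.coll-_id_-95.0" } }
      ],
      // Abort if another writer bumped the collection's chunk version first.
      preCondition: [ { ns: "config.chunks",
                        q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                        res: { lastmod: Timestamp(1, 10) } } ],
      writeConcern: { w: "majority", wtimeout: 15000 }
    });
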
[js_test:multi_coll_drop] 2016-04-06T02:52:40.510-0500 c20013| 2016-04-06T02:52:08.774-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 609 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.511-0500 c20013| 2016-04-06T02:52:08.774-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 606 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.512-0500 c20013| 2016-04-06T02:52:08.774-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|41, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.513-0500 c20013| 2016-04-06T02:52:08.774-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:40.517-0500 c20013| 2016-04-06T02:52:08.774-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 612 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.774-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.518-0500 c20013| 2016-04-06T02:52:08.774-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 612 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.526-0500 c20013| 2016-04-06T02:52:08.780-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 612 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|42, t: 1, h: 833305568785647658, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: -93.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-94.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-93.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.527-0500 c20013| 2016-04-06T02:52:08.780-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|42 and ending at ts: Timestamp 1459929128000|42 [js_test:multi_coll_drop] 2016-04-06T02:52:40.530-0500 c20013| 2016-04-06T02:52:08.780-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:40.533-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.535-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.536-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.537-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.538-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.541-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.543-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.543-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.544-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.546-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.546-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.548-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.548-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.551-0500 c20013| 2016-04-06T02:52:08.780-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.551-0500 c20013| 2016-04-06T02:52:08.781-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:40.554-0500 c20013| 2016-04-06T02:52:08.781-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-94.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:40.554-0500 c20013| 2016-04-06T02:52:08.781-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-93.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:40.555-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.557-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.559-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
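
Back on c20013, the batch above applies the replicated applyOps entry for the next split (chunks at -94.0 and -93.0) by _id. Across this section the config.locks document for "multidrop.coll" cycles between state 2 and state 0 as each per-chunk split takes and releases the collection lock; reading state 0 as unlocked and state 2 as held follows the legacy distributed-lock convention (an assumption here, the codes are not explained in the log). Both the lock and its audit trail can be inspected from the shell:

    // Current holder, if any, of the collection's distributed lock.
    db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });

    // Recent metadata events recorded for the collection (the "split" entries above).
    db.getSiblingDB("config").changelog.find({ ns: "multidrop.coll" })
      .sort({ time: -1 })
      .limit(3)
      .forEach(printjson);
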
[js_test:multi_coll_drop] 2016-04-06T02:52:40.560-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.564-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.564-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.567-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.567-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.589-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.589-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.593-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.593-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.593-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.595-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:40.602-0500 c20011| 2016-04-06T02:52:08.715-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:40.602-0500 c20011| 2016-04-06T02:52:08.715-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:40.605-0500 c20011| 2016-04-06T02:52:08.715-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|35, t: 1 } and is durable through: { ts: Timestamp 1459929128000|34, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:40.608-0500 c20011| 2016-04-06T02:52:08.715-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|35, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|34, t: 1 }, name-id: "114" }
[js_test:multi_coll_drop] 2016-04-06T02:52:40.611-0500 c20011| 2016-04-06T02:52:08.715-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable
through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.616-0500 c20011| 2016-04-06T02:52:08.715-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.617-0500 c20011| 2016-04-06T02:52:08.715-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.621-0500 c20011| 2016-04-06T02:52:08.716-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.632-0500 c20011| 2016-04-06T02:52:08.716-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.635-0500 c20011| 2016-04-06T02:52:08.717-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.636-0500 c20011| 2016-04-06T02:52:08.717-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:40.637-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.638-0500 c20012| 2016-04-06T02:52:08.477-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:40.641-0500 c20012| 2016-04-06T02:52:08.477-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 360 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.477-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.645-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.646-0500 c20012| 2016-04-06T02:52:08.477-0500 D EXECUTOR [repl writer worker 11] shutting 
down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.647-0500 c20012| 2016-04-06T02:52:08.478-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 360 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.648-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.648-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.650-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.650-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.653-0500 c20012| 2016-04-06T02:52:08.478-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:40.655-0500 c20012| 2016-04-06T02:52:08.478-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:40.656-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.703-0500 c20012| 2016-04-06T02:52:08.478-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.704-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.705-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.708-0500 c20012| 2016-04-06T02:52:08.478-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 361 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.709-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.710-0500 c20012| 
2016-04-06T02:52:08.478-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 361 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.711-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.711-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.714-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.719-0500 c20011| 2016-04-06T02:52:08.717-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|35, t: 1 } and is durable through: { ts: Timestamp 1459929128000|35, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.720-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.723-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.724-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.726-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.729-0500 c20012| 2016-04-06T02:52:08.478-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 361 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.731-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.732-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.732-0500 c20012| 2016-04-06T02:52:08.478-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:40.736-0500 c20012| 2016-04-06T02:52:08.478-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.739-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.739-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.742-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.743-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.746-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.748-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 2] shutting down thread in 
pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.761-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.762-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.765-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.766-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.771-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.775-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.776-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.777-0500 c20011| 2016-04-06T02:52:08.717-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|35, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.778-0500 c20011| 2016-04-06T02:52:08.717-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.786-0500 c20011| 2016-04-06T02:52:08.717-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.789-0500 c20011| 2016-04-06T02:52:08.717-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.798-0500 c20011| 2016-04-06T02:52:08.717-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.713-0500-5704c02865c17830b843f187", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128713), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -96.0 }, max: { _id: MaxKey } }, left: { min: { _id: -96.0 }, max: { _id: -95.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: 
-95.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.807-0500 c20011| 2016-04-06T02:52:08.717-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.808-0500 c20011| 2016-04-06T02:52:08.717-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:40.811-0500 c20011| 2016-04-06T02:52:08.717-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f186') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.813-0500 c20011| 2016-04-06T02:52:08.717-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.814-0500 c20011| 2016-04-06T02:52:08.717-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|34, t: 1 } and is durable through: { ts: Timestamp 1459929128000|33, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.820-0500 c20011| 2016-04-06T02:52:08.718-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.834-0500 c20011| 2016-04-06T02:52:08.718-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.841-0500 c20011| 2016-04-06T02:52:08.718-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f186') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.843-0500 c20011| 2016-04-06T02:52:08.718-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.845-0500 c20011| 2016-04-06T02:52:08.719-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.855-0500 c20011| 2016-04-06T02:52:08.719-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.876-0500 c20011| 2016-04-06T02:52:08.719-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.885-0500 c20011| 2016-04-06T02:52:08.719-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|36, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|35, t: 1 }, name-id: "115" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.893-0500 c20011| 2016-04-06T02:52:08.720-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.893-0500 c20011| 2016-04-06T02:52:08.720-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:40.897-0500 c20011| 2016-04-06T02:52:08.720-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.901-0500 c20011| 2016-04-06T02:52:08.720-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|35, t: 1 } and is durable through: { ts: Timestamp 1459929128000|34, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.907-0500 c20011| 2016-04-06T02:52:08.720-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|36, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 
1459929128000|35, t: 1 }, name-id: "115" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.909-0500 c20011| 2016-04-06T02:52:08.720-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|36, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|35, t: 1 }, name-id: "115" } [js_test:multi_coll_drop] 2016-04-06T02:52:40.913-0500 c20011| 2016-04-06T02:52:08.720-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:40.916-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.919-0500 c20013| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.921-0500 c20013| 2016-04-06T02:52:08.782-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:40.925-0500 c20013| 2016-04-06T02:52:08.782-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 614 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.782-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.926-0500 c20013| 2016-04-06T02:52:08.782-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 614 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.931-0500 c20013| 2016-04-06T02:52:08.782-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.937-0500 c20013| 2016-04-06T02:52:08.782-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 615 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 2, cfgver: 1 } ] } 
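
The burst of replSetUpdatePosition traffic above is the replication progress protocol at work: each secondary reports, per member, the optime it has applied and the optime it has made durable, and the primary (c20011) folds those reports into the majority commit point — the "Updating _lastCommittedOpTime" lines. That commit point is then echoed back to the secondaries in the lastKnownCommittedOpTime field of their oplog getMore requests. replSetUpdatePosition itself is an internal command, but the same per-member optimes are visible through replSetGetStatus; a minimal, runnable sketch, assuming a shell connected to any member of multidrop-configRS:

    // Print each member's last applied optime; these are the values the
    // replSetUpdatePosition entries above propagate between nodes.
    // (The log prints Timestamp as milliseconds|increment, e.g. 1459929128000|35.)
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        print(m.name + " -> " + tojson(m.optime));
    });
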
[js_test:multi_coll_drop] 2016-04-06T02:52:40.938-0500 c20013| 2016-04-06T02:52:08.782-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 615 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.940-0500 c20013| 2016-04-06T02:52:08.782-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 615 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.943-0500 c20013| 2016-04-06T02:52:08.784-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.949-0500 c20013| 2016-04-06T02:52:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 617 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:40.952-0500 c20013| 2016-04-06T02:52:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 617 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.954-0500 c20013| 2016-04-06T02:52:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 617 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.955-0500 c20013| 2016-04-06T02:52:08.784-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 614 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.958-0500 c20013| 2016-04-06T02:52:08.784-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|42, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.959-0500 c20013| 2016-04-06T02:52:08.784-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:40.960-0500 c20013| 2016-04-06T02:52:08.784-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 620 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.784-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:40.963-0500 c20013| 2016-04-06T02:52:08.784-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 620 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:40.972-0500 c20013| 2016-04-06T02:52:08.785-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 620 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|43, t: 1, h: 
-3405107048992371553, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.784-0500-5704c02865c17830b843f18b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128784), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -94.0 }, max: { _id: MaxKey } }, left: { min: { _id: -94.0 }, max: { _id: -93.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -93.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:40.976-0500 c20013| 2016-04-06T02:52:08.785-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|43 and ending at ts: Timestamp 1459929128000|43 [js_test:multi_coll_drop] 2016-04-06T02:52:40.980-0500 c20013| 2016-04-06T02:52:08.785-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:40.980-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.981-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.986-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.987-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.988-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.989-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.989-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.991-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.993-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.995-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:40.996-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.024-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.024-0500 c20013| 2016-04-06T02:52:08.785-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:41.024-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool 
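
What the fetcher on c20013 is pulling here is config metadata for the test's repeated chunk splits: a config.changelog "split" document, followed below by config.locks updates as the distributed lock for multidrop.coll is released ($set: { state: 0 }) and re-acquired (state: 2). Each one-document batch is handed to the repl writer worker pool, which is why the worker threads start and shut down around every batch. A hedged, runnable sketch for inspecting the same documents from a shell connected to a config server (collection names and the "multidrop.coll" namespace are taken from the log above):

    var configDB = db.getSiblingDB("config");
    // Most recent split recorded for the test collection.
    configDB.changelog.find({ what: "split", ns: "multidrop.coll" })
                      .sort({ time: -1 }).limit(1).pretty();
    // Distributed lock document toggled between state 2 (held) and state 0 (released).
    configDB.locks.find({ _id: "multidrop.coll" }).pretty();
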
[js_test:multi_coll_drop] 2016-04-06T02:52:41.025-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.025-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.027-0500 c20013| 2016-04-06T02:52:08.785-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.030-0500 c20013| 2016-04-06T02:52:08.787-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 622 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.787-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.033-0500 c20013| 2016-04-06T02:52:08.787-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 622 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.033-0500 c20013| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.035-0500 c20013| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.036-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.037-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.039-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.040-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.040-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.043-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.046-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.046-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.047-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.048-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.053-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.054-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 12] shutting 
down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.054-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.055-0500 c20013| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.057-0500 c20013| 2016-04-06T02:52:08.794-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:41.068-0500 c20013| 2016-04-06T02:52:08.794-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.072-0500 c20013| 2016-04-06T02:52:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 623 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.076-0500 c20013| 2016-04-06T02:52:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 623 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.079-0500 c20013| 2016-04-06T02:52:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 623 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.083-0500 c20013| 2016-04-06T02:52:08.798-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.095-0500 c20013| 2016-04-06T02:52:08.798-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 625 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { 
ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.096-0500 c20013| 2016-04-06T02:52:08.798-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 625 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.103-0500 c20013| 2016-04-06T02:52:08.798-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 625 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.104-0500 c20013| 2016-04-06T02:52:08.800-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 622 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.105-0500 c20013| 2016-04-06T02:52:08.800-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|43, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.106-0500 c20013| 2016-04-06T02:52:08.800-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:41.110-0500 c20013| 2016-04-06T02:52:08.800-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 628 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.800-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.111-0500 c20013| 2016-04-06T02:52:08.800-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 628 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.115-0500 c20013| 2016-04-06T02:52:08.801-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 628 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|44, t: 1, h: -7327796729150212279, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.121-0500 c20013| 2016-04-06T02:52:08.801-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|44 and ending at ts: Timestamp 1459929128000|44 [js_test:multi_coll_drop] 2016-04-06T02:52:41.124-0500 c20013| 2016-04-06T02:52:08.801-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:41.127-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.129-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.140-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.148-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.149-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.151-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.155-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.163-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.196-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.198-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.199-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.201-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.202-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.204-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.205-0500 c20013| 2016-04-06T02:52:08.801-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:41.207-0500 c20013| 2016-04-06T02:52:08.801-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.208-0500 c20013| 2016-04-06T02:52:08.801-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:41.209-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.210-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.210-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:41.212-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.214-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.216-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.216-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.219-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.222-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.224-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.226-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.227-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.228-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.229-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.230-0500 c20013| 2016-04-06T02:52:08.802-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.232-0500 c20013| 2016-04-06T02:52:08.803-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 630 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.803-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.244-0500 c20013| 2016-04-06T02:52:08.803-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 630 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.245-0500 c20013| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.255-0500 c20013| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.262-0500 c20013| 2016-04-06T02:52:08.805-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:41.265-0500 c20013| 2016-04-06T02:52:08.806-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.287-0500 c20013| 2016-04-06T02:52:08.806-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 631 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.289-0500 c20013| 2016-04-06T02:52:08.806-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 631 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.290-0500 c20013| 2016-04-06T02:52:08.806-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 631 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.293-0500 c20013| 2016-04-06T02:52:08.823-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.299-0500 c20013| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 633 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.305-0500 c20013| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 633 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.307-0500 c20013| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 633 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.309-0500 c20013| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 630 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.310-0500 c20013| 2016-04-06T02:52:08.824-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|44, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.310-0500 c20013| 2016-04-06T02:52:08.824-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:41.312-0500 c20013| 2016-04-06T02:52:08.825-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.314-0500 c20013| 2016-04-06T02:52:08.825-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.317-0500 c20013| 2016-04-06T02:52:08.825-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.318-0500 c20013| 2016-04-06T02:52:08.825-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:41.322-0500 c20013| 2016-04-06T02:52:08.825-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 636 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.825-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.330-0500 c20013| 2016-04-06T02:52:08.825-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:41.332-0500 c20013| 2016-04-06T02:52:08.825-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 636 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.336-0500 c20013| 2016-04-06T02:52:08.827-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.338-0500 c20013| 2016-04-06T02:52:08.827-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { 
ts: Timestamp 1459929128000|44, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.340-0500 c20013| 2016-04-06T02:52:08.827-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.343-0500 c20013| 2016-04-06T02:52:08.827-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:41.348-0500 c20013| 2016-04-06T02:52:08.828-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:41.354-0500 c20013| 2016-04-06T02:52:08.829-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 636 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|45, t: 1, h: -2798690155182775057, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f18c'), state: 2, when: new Date(1459929128828), why: "splitting chunk [{ _id: -93.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.361-0500 c20013| 2016-04-06T02:52:08.829-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|45 and ending at ts: Timestamp 1459929128000|45 [js_test:multi_coll_drop] 2016-04-06T02:52:41.361-0500 c20013| 2016-04-06T02:52:08.829-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:52:41.364-0500 c20013| 2016-04-06T02:52:08.829-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:41.367-0500 c20013| 2016-04-06T02:52:08.829-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.368-0500 c20013| 2016-04-06T02:52:08.829-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.370-0500 c20013| 2016-04-06T02:52:08.829-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.370-0500 c20013| 2016-04-06T02:52:08.829-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.371-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.372-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.382-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.383-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.386-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.387-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.392-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.395-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.398-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.402-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.404-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.404-0500 c20013| 2016-04-06T02:52:08.830-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:41.405-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.412-0500 c20013| 2016-04-06T02:52:08.830-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:41.414-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.416-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:41.417-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.418-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.422-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.424-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.426-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.427-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.429-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.429-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.435-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.439-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.441-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.444-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.448-0500 c20013| 2016-04-06T02:52:08.830-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.452-0500 c20013| 2016-04-06T02:52:08.831-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.454-0500 c20013| 2016-04-06T02:52:08.831-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:41.457-0500 c20013| 2016-04-06T02:52:08.831-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.461-0500 c20013| 2016-04-06T02:52:08.831-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 638 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.465-0500 c20013| 2016-04-06T02:52:08.831-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 638 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.467-0500 c20013| 2016-04-06T02:52:08.831-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 638 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.470-0500 c20013| 2016-04-06T02:52:08.831-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 640 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.831-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.472-0500 c20013| 2016-04-06T02:52:08.831-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 640 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.574-0500 c20013| 2016-04-06T02:52:08.832-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.596-0500 c20013| 2016-04-06T02:52:08.832-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 641 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.600-0500 c20013| 2016-04-06T02:52:08.832-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 641 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.601-0500 c20013| 2016-04-06T02:52:08.832-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 641 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.603-0500 c20013| 2016-04-06T02:52:08.832-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 640 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.609-0500 c20013| 2016-04-06T02:52:08.832-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|45, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.613-0500 c20013| 2016-04-06T02:52:08.832-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:41.616-0500 c20013| 2016-04-06T02:52:08.832-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 644 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.832-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.617-0500 c20013| 2016-04-06T02:52:08.832-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 644 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.629-0500 c20013| 2016-04-06T02:52:08.837-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 644 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|46, t: 1, h: 3326031865404345327, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: -92.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-93.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-92.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.632-0500 c20013| 2016-04-06T02:52:08.837-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|46 and ending at ts: Timestamp 1459929128000|46 [js_test:multi_coll_drop] 2016-04-06T02:52:41.634-0500 c20013| 2016-04-06T02:52:08.838-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:41.635-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.636-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.638-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.641-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.644-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.644-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.647-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.652-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.652-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.654-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.654-0500 c20013| 2016-04-06T02:52:08.838-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.656-0500 c20013| 2016-04-06T02:52:08.839-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.659-0500 c20013| 2016-04-06T02:52:08.839-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.659-0500 c20013| 2016-04-06T02:52:08.839-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:41.662-0500 c20013| 2016-04-06T02:52:08.839-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.663-0500 c20013| 2016-04-06T02:52:08.839-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-93.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:41.676-0500 c20013| 2016-04-06T02:52:08.839-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-92.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:41.678-0500 c20013| 2016-04-06T02:52:08.839-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.683-0500 c20013| 2016-04-06T02:52:08.839-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.698-0500 c20013| 2016-04-06T02:52:08.839-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 646 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.839-0500 
cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.705-0500 c20013| 2016-04-06T02:52:08.839-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 646 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.708-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.714-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.715-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.718-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.721-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.724-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.726-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.727-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.730-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.732-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.739-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.739-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.740-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.741-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.743-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.747-0500 c20013| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.748-0500 c20013| 2016-04-06T02:52:08.840-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:41.768-0500 c20013| 2016-04-06T02:52:08.840-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.781-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.783-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.790-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.793-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.804-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.807-0500 c20012| 2016-04-06T02:52:08.479-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:41.808-0500 c20012| 2016-04-06T02:52:08.480-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:41.815-0500 c20012| 2016-04-06T02:52:08.480-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.823-0500 c20012| 2016-04-06T02:52:08.480-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 363 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.826-0500 c20012| 2016-04-06T02:52:08.480-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 363 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.833-0500 c20012| 2016-04-06T02:52:08.480-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 363 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.837-0500 c20012| 2016-04-06T02:52:08.480-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.849-0500 c20012| 2016-04-06T02:52:08.480-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 364 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.852-0500 c20012| 2016-04-06T02:52:08.480-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 364 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.853-0500 c20012| 2016-04-06T02:52:08.480-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 364 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.866-0500 c20012| 2016-04-06T02:52:08.482-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 360 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.870-0500 c20012| 2016-04-06T02:52:08.482-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.871-0500 c20012| 2016-04-06T02:52:08.482-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:41.878-0500 c20012| 2016-04-06T02:52:08.482-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.889-0500 c20012| 2016-04-06T02:52:08.482-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 368 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:41.895-0500 c20012| 2016-04-06T02:52:08.482-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 368 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.898-0500 c20012| 2016-04-06T02:52:08.482-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 369 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.482-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.900-0500 c20012| 2016-04-06T02:52:08.482-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 369 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:41.902-0500 c20012| 2016-04-06T02:52:08.483-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 368 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.903-0500 c20012| 2016-04-06T02:52:08.486-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36790 #11 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:41.905-0500 c20012| 2016-04-06T02:52:08.486-0500 D COMMAND [conn11] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:52:41.907-0500 c20012| 2016-04-06T02:52:08.486-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:458 locks:{} 
protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:41.909-0500 c20012| 2016-04-06T02:52:08.486-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.913-0500 c20012| 2016-04-06T02:52:08.486-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:41.919-0500 c20012| 2016-04-06T02:52:08.486-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.924-0500 c20012| 2016-04-06T02:52:08.486-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:41.934-0500 c20012| 2016-04-06T02:52:08.487-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|10, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:530 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:41.940-0500 c20012| 2016-04-06T02:52:08.490-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 369 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|11, t: 1, h: 3457335805137684592, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.489-0500-5704c02806c33406d4d9c0c1", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929128489), what: "shardCollection.end", ns: "multidrop.coll", details: { version: "1|0||5704c02806c33406d4d9c0c0" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:41.953-0500 c20012| 2016-04-06T02:52:08.490-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|11 and ending at ts: Timestamp 1459929128000|11 [js_test:multi_coll_drop] 2016-04-06T02:52:41.989-0500 c20012| 2016-04-06T02:52:08.490-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:42.010-0500 c20012| 2016-04-06T02:52:08.490-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.010-0500 c20012| 2016-04-06T02:52:08.490-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.010-0500 c20012| 2016-04-06T02:52:08.490-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.014-0500 c20012| 2016-04-06T02:52:08.490-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.019-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.027-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.036-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.036-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.040-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.044-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.045-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.047-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.047-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.048-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.049-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.049-0500 c20012| 2016-04-06T02:52:08.491-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:42.050-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.050-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.054-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.055-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:42.056-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.059-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.063-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.065-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.067-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.067-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.068-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.072-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.074-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.075-0500 c20012| 2016-04-06T02:52:08.491-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.075-0500 c20012| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.076-0500 c20012| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.079-0500 c20012| 2016-04-06T02:52:08.492-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 372 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.492-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.081-0500 c20012| 2016-04-06T02:52:08.492-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.082-0500 c20012| 2016-04-06T02:52:08.492-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 372 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.083-0500 c20012| 2016-04-06T02:52:08.493-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:42.087-0500 c20012| 2016-04-06T02:52:08.493-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.092-0500 c20012| 2016-04-06T02:52:08.493-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 373 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.095-0500 c20012| 2016-04-06T02:52:08.493-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 373 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.097-0500 c20012| 2016-04-06T02:52:08.493-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 373 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.103-0500 c20012| 2016-04-06T02:52:08.495-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.112-0500 c20012| 2016-04-06T02:52:08.495-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 375 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.117-0500 c20012| 2016-04-06T02:52:08.495-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 375 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.117-0500 c20012| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 375 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.119-0500 c20012| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 372 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.120-0500 c20012| 2016-04-06T02:52:08.496-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.120-0500 c20012| 2016-04-06T02:52:08.496-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:42.122-0500 c20012| 2016-04-06T02:52:08.496-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 378 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.496-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.123-0500 c20012| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 378 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.127-0500 c20012| 2016-04-06T02:52:08.496-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 378 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|12, t: 1, h: 8307982106745841146, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.128-0500 c20012| 2016-04-06T02:52:08.496-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|12 and ending at ts: Timestamp 1459929128000|12 [js_test:multi_coll_drop] 2016-04-06T02:52:42.130-0500 c20012| 2016-04-06T02:52:08.497-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:42.132-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.134-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.135-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.166-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.166-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.173-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.175-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.179-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.180-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.210-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.211-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.212-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.218-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.220-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.221-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.221-0500 c20012| 2016-04-06T02:52:08.497-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:42.221-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.222-0500 c20012| 2016-04-06T02:52:08.497-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.223-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.225-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:42.225-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.227-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.227-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.227-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.229-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.230-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.231-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.233-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.235-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.235-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.241-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.241-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.252-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.254-0500 c20012| 2016-04-06T02:52:08.497-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.269-0500 c20012| 2016-04-06T02:52:08.499-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 380 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.499-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.270-0500 c20012| 2016-04-06T02:52:08.499-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:42.271-0500 c20012| 2016-04-06T02:52:08.499-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 380 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.275-0500 c20012| 2016-04-06T02:52:08.500-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.312-0500 c20012| 2016-04-06T02:52:08.500-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 381 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.314-0500 c20012| 2016-04-06T02:52:08.500-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 381 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.315-0500 c20012| 2016-04-06T02:52:08.500-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 381 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.318-0500 c20012| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 380 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.329-0500 c20012| 2016-04-06T02:52:08.502-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.332-0500 c20012| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 384 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: 
Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.340-0500 c20012| 2016-04-06T02:52:08.502-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 384 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.344-0500 c20012| 2016-04-06T02:52:08.503-0500 D COMMAND [conn7] run command config.$cmd { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.345-0500 c20012| 2016-04-06T02:52:08.503-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.348-0500 c20012| 2016-04-06T02:52:08.503-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:42.353-0500 c20012| 2016-04-06T02:52:08.503-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 384 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.360-0500 c20012| 2016-04-06T02:52:08.503-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 386 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.503-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.361-0500 c20012| 2016-04-06T02:52:08.503-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.368-0500 c20012| 2016-04-06T02:52:08.503-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.372-0500 c20012| 2016-04-06T02:52:08.503-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 386 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.372-0500 c20012| 2016-04-06T02:52:08.503-0500 D QUERY [conn7] Using idhack: query: { _id: "multidrop" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:42.379-0500 c20012| 2016-04-06T02:52:08.503-0500 I COMMAND [conn7] command config.databases command: find { find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:437 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.380-0500 c20012| 2016-04-06T02:52:08.503-0500 D COMMAND [conn7] run command config.$cmd { find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.382-0500 c20012| 2016-04-06T02:52:08.503-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.385-0500 c20012| 2016-04-06T02:52:08.503-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.386-0500 c20012| 2016-04-06T02:52:08.503-0500 D QUERY [conn7] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.collections" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.390-0500 c20012| 2016-04-06T02:52:08.503-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. 
query: { _id: /^multidrop\./ } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.397-0500 c20012| 2016-04-06T02:52:08.504-0500 I COMMAND [conn7] command config.collections command: find { find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|12, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { _id: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.408-0500 c20012| 2016-04-06T02:52:08.514-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 386 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|13, t: 1, h: -7456382829225788614, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f17c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128507), why: "splitting chunk [{ _id: MinKey }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.414-0500 c20012| 2016-04-06T02:52:08.514-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|13 and ending at ts: Timestamp 1459929128000|13 [js_test:multi_coll_drop] 2016-04-06T02:52:42.414-0500 c20012| 2016-04-06T02:52:08.515-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:42.416-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.417-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.417-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.418-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.419-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.419-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.420-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.421-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.421-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.422-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool 
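
Note: the getMore responses replicated above carry the config.locks updates for the "multidrop.coll" distributed lock. Request 378 delivers { $set: { state: 0 } } (the lock released after shardCollection finished), and Request 386 delivers the re-acquisition with state: 2, a fresh ts ObjectId, and why: "splitting chunk [{ _id: MinKey }, { _id: MaxKey }) in multidrop.coll". A minimal shell sketch, not part of the test file, assuming a direct connection to one of these config servers; the field names and state values (0 released, 2 held) match the oplog entries above:

    // Hedged illustration: look up the distributed-lock document for
    // multidrop.coll on a config server and report whether it is held.
    var configDB = db.getSiblingDB("config");
    var lock = configDB.locks.findOne({ _id: "multidrop.coll" });
    if (lock === null) {
        print("no lock document for multidrop.coll");
    } else if (lock.state === 0) {
        print("lock released (state 0)");
    } else {
        // state 2 marks a held lock; "who" and "why" identify the holder, e.g.
        // why: "splitting chunk [{ _id: MinKey }, { _id: MaxKey }) in multidrop.coll"
        print("lock held by " + lock.who + " because: " + lock.why);
    }

The log that follows shows the secondaries applying exactly this document via the usual idhack path ({ _id: "multidrop.coll" }) as each lock update replicates.
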
[js_test:multi_coll_drop] 2016-04-06T02:52:42.424-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.427-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.428-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.429-0500 c20012| 2016-04-06T02:52:08.515-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:42.431-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.433-0500 c20012| 2016-04-06T02:52:08.515-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.433-0500 c20012| 2016-04-06T02:52:08.515-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.436-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.440-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.441-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.441-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.448-0500 c20013| 2016-04-06T02:52:08.840-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 647 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.455-0500 c20013| 2016-04-06T02:52:08.840-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 647 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.456-0500 c20013| 2016-04-06T02:52:08.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 647 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.458-0500 c20013| 2016-04-06T02:52:08.842-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 646 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.460-0500 c20013| 2016-04-06T02:52:08.842-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|46, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.464-0500 c20013| 2016-04-06T02:52:08.842-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from 
remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:42.467-0500 c20013| 2016-04-06T02:52:08.842-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 650 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.842-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.473-0500 c20011| 2016-04-06T02:52:08.720-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.475-0500 c20011| 2016-04-06T02:52:08.720-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.478-0500 c20011| 2016-04-06T02:52:08.720-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.486-0500 c20011| 2016-04-06T02:52:08.720-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|35, t: 1 } and is durable through: { ts: Timestamp 1459929128000|35, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.488-0500 c20011| 2016-04-06T02:52:08.720-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929128000|36, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|35, t: 1 }, name-id: "115" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.495-0500 c20011| 2016-04-06T02:52:08.720-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.500-0500 c20011| 2016-04-06T02:52:08.721-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.501-0500 c20011| 2016-04-06T02:52:08.721-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.504-0500 c20011| 
2016-04-06T02:52:08.721-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|36, t: 1 } and is durable through: { ts: Timestamp 1459929128000|35, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.508-0500 c20011| 2016-04-06T02:52:08.721-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|36, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|35, t: 1 }, name-id: "115" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.510-0500 c20011| 2016-04-06T02:52:08.721-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.512-0500 c20011| 2016-04-06T02:52:08.721-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.515-0500 c20011| 2016-04-06T02:52:08.721-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.517-0500 c20011| 2016-04-06T02:52:08.722-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.518-0500 c20011| 2016-04-06T02:52:08.722-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.519-0500 c20011| 2016-04-06T02:52:08.722-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|36, t: 1 } and is durable through: { ts: Timestamp 1459929128000|36, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.519-0500 c20011| 2016-04-06T02:52:08.722-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|36, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.521-0500 c20011| 2016-04-06T02:52:08.722-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.525-0500 c20011| 2016-04-06T02:52:08.722-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.536-0500 c20011| 2016-04-06T02:52:08.722-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.571-0500 c20011| 2016-04-06T02:52:08.722-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f186') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.581-0500 c20011| 2016-04-06T02:52:08.722-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.582-0500 c20011| 2016-04-06T02:52:08.722-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.586-0500 c20011| 2016-04-06T02:52:08.722-0500 D REPL [conn16] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.590-0500 c20011| 2016-04-06T02:52:08.722-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|36, t: 1 } and is durable through: { ts: Timestamp 1459929128000|35, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.604-0500 c20011| 2016-04-06T02:52:08.722-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.617-0500 c20011| 2016-04-06T02:52:08.722-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, 
appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.619-0500 c20011| 2016-04-06T02:52:08.722-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.623-0500 c20011| 2016-04-06T02:52:08.722-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.627-0500 c20011| 2016-04-06T02:52:08.722-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.628-0500 c20011| 2016-04-06T02:52:08.722-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.638-0500 c20011| 2016-04-06T02:52:08.722-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.643-0500 c20011| 2016-04-06T02:52:08.722-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|36, t: 1 } and is durable through: { ts: Timestamp 1459929128000|36, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.673-0500 c20011| 2016-04-06T02:52:08.722-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.679-0500 c20011| 2016-04-06T02:52:08.723-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } [js_test:multi_coll_drop] 
2016-04-06T02:52:42.689-0500 c20011| 2016-04-06T02:52:08.725-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f188'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128725), why: "splitting chunk [{ _id: -95.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.694-0500 c20011| 2016-04-06T02:52:08.725-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.696-0500 c20011| 2016-04-06T02:52:08.725-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.700-0500 c20011| 2016-04-06T02:52:08.725-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.704-0500 c20011| 2016-04-06T02:52:08.726-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.707-0500 c20011| 2016-04-06T02:52:08.726-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.719-0500 c20011| 2016-04-06T02:52:08.728-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.720-0500 c20011| 2016-04-06T02:52:08.728-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.720-0500 c20011| 2016-04-06T02:52:08.728-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|37, t: 1 } and is durable through: { ts: Timestamp 1459929128000|36, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.724-0500 c20011| 2016-04-06T02:52:08.728-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has 
reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.726-0500 c20011| 2016-04-06T02:52:08.728-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.765-0500 c20011| 2016-04-06T02:52:08.728-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.765-0500 c20011| 2016-04-06T02:52:08.728-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.768-0500 c20011| 2016-04-06T02:52:08.728-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.792-0500 c20011| 2016-04-06T02:52:08.728-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|37, t: 1 } and is durable through: { ts: Timestamp 1459929128000|36, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.797-0500 c20013| 2016-04-06T02:52:08.842-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 650 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:42.805-0500 c20013| 2016-04-06T02:52:08.842-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 650 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|47, t: 1, h: -7437953265225953598, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.842-0500-5704c02865c17830b843f18d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128842), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -93.0 }, max: { _id: MaxKey } }, left: { min: { _id: -93.0 }, max: { _id: -92.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -92.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.807-0500 c20013| 2016-04-06T02:52:08.842-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|47 and ending at ts: Timestamp 1459929128000|47 [js_test:multi_coll_drop] 2016-04-06T02:52:42.814-0500 
c20013| 2016-04-06T02:52:08.843-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:42.814-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.816-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.817-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.818-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.819-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.824-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.834-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.838-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.841-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.842-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.843-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:42.848-0500 c20011| 2016-04-06T02:52:08.728-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.850-0500 c20011| 2016-04-06T02:52:08.728-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.854-0500 c20011| 2016-04-06T02:52:08.728-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.858-0500 c20011| 2016-04-06T02:52:08.729-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|37, t: 1 } is not yet part of the current 'committed' 
snapshot: { optime: { ts: Timestamp 1459929128000|36, t: 1 }, name-id: "116" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.868-0500 c20011| 2016-04-06T02:52:08.729-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.871-0500 c20011| 2016-04-06T02:52:08.729-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.874-0500 c20011| 2016-04-06T02:52:08.729-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.875-0500 c20011| 2016-04-06T02:52:08.729-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|37, t: 1 } and is durable through: { ts: Timestamp 1459929128000|37, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.876-0500 c20011| 2016-04-06T02:52:08.729-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|37, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.880-0500 c20011| 2016-04-06T02:52:08.729-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.886-0500 c20011| 2016-04-06T02:52:08.729-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f188'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128725), why: "splitting chunk [{ _id: -95.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f188'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128725), why: "splitting chunk [{ _id: -95.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.903-0500 
c20011| 2016-04-06T02:52:08.730-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:42.919-0500 c20011| 2016-04-06T02:52:08.730-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.925-0500 c20011| 2016-04-06T02:52:08.730-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:42.929-0500 c20011| 2016-04-06T02:52:08.730-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|37, t: 1 } and is durable through: { ts: Timestamp 1459929128000|37, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.939-0500 c20011| 2016-04-06T02:52:08.730-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.941-0500 c20011| 2016-04-06T02:52:08.730-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.942-0500 c20011| 2016-04-06T02:52:08.730-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:42.952-0500 c20011| 2016-04-06T02:52:08.730-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.964-0500 c20011| 2016-04-06T02:52:08.730-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } [js_test:multi_coll_drop] 
2016-04-06T02:52:42.964-0500 *** Stepping down connection to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:42.975-0500 c20011| 2016-04-06T02:52:08.731-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: -94.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-95.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-94.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|12 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:42.978-0500 c20011| 2016-04-06T02:52:08.731-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:42.995-0500 c20011| 2016-04-06T02:52:08.731-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:42.995-0500 c20011| 2016-04-06T02:52:08.731-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:42.995-0500 c20011| 2016-04-06T02:52:08.731-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-95.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:42.996-0500 c20011| 2016-04-06T02:52:08.731-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-94.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.018-0500 c20011| 2016-04-06T02:52:08.731-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.018-0500 c20011| 2016-04-06T02:52:08.732-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.022-0500 c20011| 2016-04-06T02:52:08.733-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.022-0500 c20011| 2016-04-06T02:52:08.733-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.025-0500 c20011| 2016-04-06T02:52:08.733-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|38, t: 1 } and is durable through: { ts: Timestamp 1459929128000|37, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.027-0500 c20011| 2016-04-06T02:52:08.733-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.028-0500 c20011| 2016-04-06T02:52:08.733-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.029-0500 c20011| 2016-04-06T02:52:08.733-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.034-0500 c20011| 2016-04-06T02:52:08.733-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.037-0500 c20011| 2016-04-06T02:52:08.733-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.039-0500 c20011| 2016-04-06T02:52:08.733-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|38, t: 1 } and is durable through: { ts: Timestamp 1459929128000|37, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.045-0500 c20011| 2016-04-06T02:52:08.733-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { 
ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.047-0500 c20011| 2016-04-06T02:52:08.734-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.052-0500 c20011| 2016-04-06T02:52:08.734-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.053-0500 c20011| 2016-04-06T02:52:08.734-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|38, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|37, t: 1 }, name-id: "117" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.056-0500 c20011| 2016-04-06T02:52:08.734-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.056-0500 c20011| 2016-04-06T02:52:08.734-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.057-0500 c20011| 2016-04-06T02:52:08.734-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.067-0500 c20011| 2016-04-06T02:52:08.734-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.068-0500 c20011| 2016-04-06T02:52:08.734-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.068-0500 c20011| 2016-04-06T02:52:08.734-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|38, t: 1 } and is durable through: { ts: Timestamp 1459929128000|38, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.069-0500 c20011| 2016-04-06T02:52:08.734-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|38, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.070-0500 c20011| 2016-04-06T02:52:08.734-0500 I COMMAND [conn15] command admin.$cmd command: 
replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.071-0500 c20011| 2016-04-06T02:52:08.734-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|38, t: 1 } and is durable through: { ts: Timestamp 1459929128000|38, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.072-0500 c20011| 2016-04-06T02:52:08.735-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.075-0500 c20011| 2016-04-06T02:52:08.735-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.101-0500 c20011| 2016-04-06T02:52:08.735-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: -94.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-95.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-94.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|12 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.120-0500 c20011| 2016-04-06T02:52:08.735-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms 
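The applyOps entry above is how the split is committed on the config server: both post-split chunk documents for multidrop.coll are upserted in a single atomic batch, and the preCondition first checks that the collection's highest lastmod is still Timestamp 1000|12, so a concurrent metadata change would fail the command instead of being silently overwritten. The repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines around it are the w: "majority" wait for each such write to replicate. A minimal mongo-shell sketch of the same precondition-guarded pattern; the namespace, chunk bounds, and version numbers below are illustrative assumptions, not values from this log:

    // Sketch: commit a two-way chunk split with an optimistic precondition.
    // All names and version numbers here are hypothetical.
    var res = db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "test.coll-_id_0.0", lastmod: Timestamp(1000, 3),
                   ns: "test.coll", min: { _id: 0 }, max: { _id: 50 },
                   shard: "shard0000" },
              o2: { _id: "test.coll-_id_0.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "test.coll-_id_50.0", lastmod: Timestamp(1000, 4),
                   ns: "test.coll", min: { _id: 50 }, max: { _id: MaxKey },
                   shard: "shard0000" },
              o2: { _id: "test.coll-_id_50.0" } }
        ],
        // Abort unless the newest chunk of the collection still carries the
        // version the split was based on.
        preCondition: [
            { ns: "config.chunks",
              q: { query: { ns: "test.coll" }, orderby: { lastmod: -1 } },
              res: { lastmod: Timestamp(1000, 2) } }
        ],
        // As in the log, only return once a majority has the write.
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    assert.commandWorked(res);

The bumped lastmod values (1000|13 and 1000|14 in the log) are what let the shards detect stale chunk metadata on their next versioned request.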
[js_test:multi_coll_drop] 2016-04-06T02:52:43.128-0500 c20011| 2016-04-06T02:52:08.735-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.131-0500 c20011| 2016-04-06T02:52:08.735-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.132-0500 c20011| 2016-04-06T02:52:08.735-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.135-0500 c20011| 2016-04-06T02:52:08.737-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.735-0500-5704c02865c17830b843f189", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128735), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -95.0 }, max: { _id: MaxKey } }, left: { min: { _id: -95.0 }, max: { _id: -94.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -94.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.152-0500 c20011| 2016-04-06T02:52:08.737-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.154-0500 c20011| 2016-04-06T02:52:08.738-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.157-0500 c20011| 2016-04-06T02:52:08.739-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.158-0500 c20011| 2016-04-06T02:52:08.739-0500 D COMMAND [conn15] command: replSetUpdatePosition 
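Each getMore on local.oplog.rs in this stretch is one round of a secondary tailing the primary's oplog: the request carries the follower's term and lastKnownCommittedOpTime, and after applying a batch the secondary reports back via replSetUpdatePosition, which is what lets the primary advance _lastCommittedOpTime past each split write. The "*** Stepping down connection to mongovm16:20012" marker a little earlier comes from the continuous config step-down override this suite runs, which keeps forcing the config replica set through elections while the splits proceed. A hedged sketch of such a forced step-down; the host, timeout, and force option are assumptions, not read from this log:

    // Sketch: force a replica-set node to step down, as a continuous
    // step-down driver would. Host and timings are illustrative.
    var conn = new Mongo("mongovm16:20012");
    try {
        // force: true skips waiting for a caught-up secondary, so the
        // election happens immediately.
        conn.getDB("admin").runCommand({ replSetStepDown: 10, force: true });
    } catch (e) {
        // A stepping-down primary closes its connections, so a network
        // error here is the expected outcome rather than a failure.
        print("step-down closed the connection: " + e);
    }

The findAndModify on config.locks just below, which sets state back to 0 for ts ObjectId('5704c02865c17830b843f188'), releases the distributed lock that was taken with why: "splitting chunk [{ _id: -95.0 }, { _id: MaxKey }) in multidrop.coll".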
[js_test:multi_coll_drop] 2016-04-06T02:52:43.161-0500 c20011| 2016-04-06T02:52:08.739-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.168-0500 c20011| 2016-04-06T02:52:08.739-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|39, t: 1 } and is durable through: { ts: Timestamp 1459929128000|38, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.199-0500 c20011| 2016-04-06T02:52:08.739-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.230-0500 c20011| 2016-04-06T02:52:08.739-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.231-0500 c20011| 2016-04-06T02:52:08.739-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.247-0500 c20011| 2016-04-06T02:52:08.739-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|39, t: 1 } and is durable through: { ts: Timestamp 1459929128000|38, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.250-0500 c20011| 2016-04-06T02:52:08.739-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.255-0500 c20011| 2016-04-06T02:52:08.739-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.258-0500 c20011| 2016-04-06T02:52:08.740-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.260-0500 c20011| 2016-04-06T02:52:08.740-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.263-0500 c20011| 2016-04-06T02:52:08.751-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|39, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|38, t: 1 }, name-id: "118" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.266-0500 c20011| 2016-04-06T02:52:08.751-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.266-0500 c20011| 2016-04-06T02:52:08.751-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.268-0500 c20011| 2016-04-06T02:52:08.751-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.270-0500 c20011| 2016-04-06T02:52:08.751-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|39, t: 1 } and is durable through: { ts: Timestamp 1459929128000|39, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.271-0500 c20011| 2016-04-06T02:52:08.751-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|39, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.272-0500 c20011| 2016-04-06T02:52:08.751-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.275-0500 c20011| 2016-04-06T02:52:08.751-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.278-0500 c20011| 2016-04-06T02:52:08.751-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] 
} numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.280-0500 c20011| 2016-04-06T02:52:08.751-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.284-0500 c20011| 2016-04-06T02:52:08.751-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|39, t: 1 } and is durable through: { ts: Timestamp 1459929128000|39, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.289-0500 c20011| 2016-04-06T02:52:08.751-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.293-0500 c20011| 2016-04-06T02:52:08.751-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.294-0500 c20011| 2016-04-06T02:52:08.752-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.300-0500 c20011| 2016-04-06T02:52:08.752-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.735-0500-5704c02865c17830b843f189", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128735), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -95.0 }, max: { _id: MaxKey } }, left: { min: { _id: -95.0 }, max: { _id: -94.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -94.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 14ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.302-0500 c20011| 2016-04-06T02:52:08.752-0500 D COMMAND [conn14] run command local.$cmd { getMore: 
17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.303-0500 c20011| 2016-04-06T02:52:08.752-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f188') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.304-0500 c20011| 2016-04-06T02:52:08.752-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.307-0500 c20011| 2016-04-06T02:52:08.752-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02865c17830b843f188') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.310-0500 c20011| 2016-04-06T02:52:08.752-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.312-0500 c20011| 2016-04-06T02:52:08.752-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.313-0500 c20011| 2016-04-06T02:52:08.753-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|40, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|39, t: 1 }, name-id: "119" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.318-0500 c20011| 2016-04-06T02:52:08.754-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.319-0500 c20011| 2016-04-06T02:52:08.755-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.321-0500 c20011| 2016-04-06T02:52:08.756-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.321-0500 c20011| 2016-04-06T02:52:08.756-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 
2016-04-06T02:52:43.322-0500 c20011| 2016-04-06T02:52:08.756-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.324-0500 c20011| 2016-04-06T02:52:08.756-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|40, t: 1 } and is durable through: { ts: Timestamp 1459929128000|39, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.327-0500 c20011| 2016-04-06T02:52:08.756-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|40, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|39, t: 1 }, name-id: "119" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.333-0500 c20011| 2016-04-06T02:52:08.756-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.334-0500 c20011| 2016-04-06T02:52:08.756-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.363-0500 c20011| 2016-04-06T02:52:08.758-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.363-0500 c20011| 2016-04-06T02:52:08.758-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.368-0500 c20011| 2016-04-06T02:52:08.758-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|40, t: 1 } and is durable through: { ts: Timestamp 1459929128000|39, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.371-0500 c20011| 2016-04-06T02:52:08.758-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|40, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|39, t: 1 }, name-id: "119" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.377-0500 c20011| 2016-04-06T02:52:08.758-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 
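[annotation] The repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines show why that majority write has not yet been acknowledged: an oplog entry only becomes majority-committed once enough members report it durable, and the primary's own durable position counts toward that majority (which is how entry |40 can commit here even though member 0 lags far behind). A rough sketch of the rule, assuming a three-member set and comparing timestamps only (real optimes also carry a term):

    // Returns true once at least a majority of members (2 of 3 here) have
    // durably written the oplog entry with this timestamp.
    function isMajorityCommitted(opTs, durableTimestamps) {
        var majority = Math.floor(durableTimestamps.length / 2) + 1;
        var have = durableTimestamps.filter(function (t) { return t >= opTs; }).length;
        return have >= majority;
    }
    // While only the primary has |40 durable, the w:"majority" write stays
    // pending; as soon as one secondary reports durable through |40, the
    // primary logs "Updating _lastCommittedOpTime" as seen above.
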
2016-04-06T02:52:43.387-0500 c20011| 2016-04-06T02:52:08.758-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.393-0500 c20011| 2016-04-06T02:52:08.759-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.396-0500 c20011| 2016-04-06T02:52:08.759-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.402-0500 c20011| 2016-04-06T02:52:08.759-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.404-0500 c20011| 2016-04-06T02:52:08.759-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|40, t: 1 } and is durable through: { ts: Timestamp 1459929128000|40, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.405-0500 c20011| 2016-04-06T02:52:08.759-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|40, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.415-0500 c20011| 2016-04-06T02:52:08.759-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.428-0500 c20011| 2016-04-06T02:52:08.760-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.431-0500 c20011| 
2016-04-06T02:52:08.760-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.438-0500 c20011| 2016-04-06T02:52:08.760-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|40, t: 1 } and is durable through: { ts: Timestamp 1459929128000|40, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.445-0500 c20011| 2016-04-06T02:52:08.760-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.450-0500 c20011| 2016-04-06T02:52:08.760-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.453-0500 c20011| 2016-04-06T02:52:08.765-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.462-0500 c20011| 2016-04-06T02:52:08.765-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f188') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.468-0500 c20011| 2016-04-06T02:52:08.765-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.471-0500 c20011| 2016-04-06T02:52:08.765-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.473-0500 c20011| 2016-04-06T02:52:08.765-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: 
{ ts: Timestamp 1459929128000|40, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.478-0500 c20011| 2016-04-06T02:52:08.765-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.481-0500 c20011| 2016-04-06T02:52:08.765-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.487-0500 c20011| 2016-04-06T02:52:08.765-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.488-0500 c20011| 2016-04-06T02:52:08.766-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:43.497-0500 c20011| 2016-04-06T02:52:08.767-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.501-0500 c20011| 2016-04-06T02:52:08.768-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.507-0500 c20011| 2016-04-06T02:52:08.768-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.508-0500 c20011| 2016-04-06T02:52:08.768-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.511-0500 c20011| 2016-04-06T02:52:08.768-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:43.519-0500 c20011| 2016-04-06T02:52:08.768-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.529-0500 c20011| 2016-04-06T02:52:08.769-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f18a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128769), why: "splitting chunk [{ _id: -94.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.531-0500 c20011| 2016-04-06T02:52:08.769-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.536-0500 c20011| 2016-04-06T02:52:08.769-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.538-0500 c20011| 2016-04-06T02:52:08.769-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.545-0500 c20011| 2016-04-06T02:52:08.770-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.550-0500 c20011| 2016-04-06T02:52:08.770-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.562-0500 c20011| 2016-04-06T02:52:08.772-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.572-0500 c20011| 2016-04-06T02:52:08.772-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.574-0500 c20011| 2016-04-06T02:52:08.772-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.582-0500 c20011| 2016-04-06T02:52:08.772-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.587-0500 c20011| 2016-04-06T02:52:08.772-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|41, t: 1 } and is durable through: { ts: Timestamp 1459929128000|40, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.591-0500 c20011| 2016-04-06T02:52:08.772-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.596-0500 c20011| 2016-04-06T02:52:08.772-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 
1459929128000|41, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|40, t: 1 }, name-id: "120" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.599-0500 c20011| 2016-04-06T02:52:08.772-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.621-0500 c20011| 2016-04-06T02:52:08.773-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.621-0500 c20011| 2016-04-06T02:52:08.773-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.632-0500 c20011| 2016-04-06T02:52:08.773-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|41, t: 1 } and is durable through: { ts: Timestamp 1459929128000|40, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.653-0500 c20011| 2016-04-06T02:52:08.773-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|41, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|40, t: 1 }, name-id: "120" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.658-0500 c20011| 2016-04-06T02:52:08.773-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.665-0500 c20011| 2016-04-06T02:52:08.773-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.671-0500 c20011| 2016-04-06T02:52:08.774-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.675-0500 c20011| 2016-04-06T02:52:08.774-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 
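[annotation] The config.chunks reads interleaved above pair readConcern level "majority" with afterOpTime: the server first blocks until its committed snapshot covers the given optime ("Waiting for 'committed' snapshot to be available for reading"), then serves the query from that snapshot ("Using 'committed' snapshot"). A sketch of the same read from the shell, assuming the internal afterOpTime extension is accepted as in the log; the log renders timestamps in milliseconds, so Timestamp 1459929128000|40 is Timestamp(1459929128, 40) in shell syntax:

    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: {
            level: "majority",   // read only majority-committed data...
            afterOpTime: { ts: Timestamp(1459929128, 40), t: NumberLong(1) }  // ...no older than this optime
        },
        maxTimeMS: 30000
    });
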
2016-04-06T02:52:43.679-0500 c20011| 2016-04-06T02:52:08.774-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.680-0500 c20011| 2016-04-06T02:52:08.774-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|41, t: 1 } and is durable through: { ts: Timestamp 1459929128000|41, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.682-0500 c20011| 2016-04-06T02:52:08.774-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|41, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.687-0500 c20011| 2016-04-06T02:52:08.774-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.692-0500 c20011| 2016-04-06T02:52:08.774-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.698-0500 c20011| 2016-04-06T02:52:08.774-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f18a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128769), why: "splitting chunk [{ _id: -94.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f18a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128769), why: "splitting chunk [{ _id: -94.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.699-0500 c20011| 2016-04-06T02:52:08.774-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.704-0500 c20011| 2016-04-06T02:52:08.775-0500 D COMMAND [conn25] run command config.$cmd { 
find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|41, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.709-0500 c20011| 2016-04-06T02:52:08.775-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.711-0500 c20011| 2016-04-06T02:52:08.775-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|41, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.719-0500 c20011| 2016-04-06T02:52:08.775-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:43.728-0500 c20011| 2016-04-06T02:52:08.775-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.734-0500 c20011| 2016-04-06T02:52:08.776-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.734-0500 c20011| 2016-04-06T02:52:08.776-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.739-0500 c20011| 2016-04-06T02:52:08.776-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|41, t: 1 } and is durable through: { ts: Timestamp 1459929128000|41, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.741-0500 c20011| 2016-04-06T02:52:08.776-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.748-0500 c20011| 2016-04-06T02:52:08.776-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.750-0500 c20011| 
2016-04-06T02:52:08.777-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.754-0500 c20011| 2016-04-06T02:52:08.778-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|41, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.760-0500 c20011| 2016-04-06T02:52:08.778-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|14 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|41, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.762-0500 c20011| 2016-04-06T02:52:08.778-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.765-0500 c20011| 2016-04-06T02:52:08.778-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|14 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|41, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.767-0500 c20011| 2016-04-06T02:52:08.779-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:43.776-0500 c20011| 2016-04-06T02:52:08.779-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|14 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|41, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.780-0500 c20011| 2016-04-06T02:52:08.779-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: -93.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-94.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-93.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: 
Timestamp 1000|14 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.786-0500 c20011| 2016-04-06T02:52:08.779-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:43.787-0500 c20011| 2016-04-06T02:52:08.779-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:43.792-0500 c20011| 2016-04-06T02:52:08.779-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.793-0500 c20011| 2016-04-06T02:52:08.779-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-94.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.795-0500 c20011| 2016-04-06T02:52:08.779-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-93.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.800-0500 c20011| 2016-04-06T02:52:08.779-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.804-0500 c20011| 2016-04-06T02:52:08.780-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.810-0500 c20011| 2016-04-06T02:52:08.782-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.811-0500 c20011| 2016-04-06T02:52:08.782-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|42, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|41, t: 1 }, name-id: "121" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.815-0500 c20011| 2016-04-06T02:52:08.782-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
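[annotation] The QUERY score(...) lines above print the plan ranker's formula: score = baseScore + productivity + tieBreakers, where productivity is documents advanced divided by work units during the trial period. Restated as a sketch, with the constants copied from the log:

    // baseScore is fixed at 1; the three 0.0001 bonuses reward plans that
    // avoid fetching, sorting, and index intersection.
    function planScore(advanced, works) {
        var tieBreakers = 0.0001 + 0.0001 + 0.0001;
        return 1 + (advanced / works) + tieBreakers;
    }
    planScore(1, 1);  // 2.0003 -- the exact-match IXSCAN above
    planScore(1, 2);  // 1.5003 -- the $gte lastmod scan a few entries earlier
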
2016-04-06T02:52:43.816-0500 c20011| 2016-04-06T02:52:08.782-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.821-0500 c20011| 2016-04-06T02:52:08.782-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.825-0500 c20011| 2016-04-06T02:52:08.782-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|42, t: 1 } and is durable through: { ts: Timestamp 1459929128000|41, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.827-0500 c20011| 2016-04-06T02:52:08.782-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|42, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|41, t: 1 }, name-id: "121" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.830-0500 c20011| 2016-04-06T02:52:08.782-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.832-0500 c20011| 2016-04-06T02:52:08.783-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.836-0500 c20011| 2016-04-06T02:52:08.784-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.836-0500 c20011| 2016-04-06T02:52:08.784-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.839-0500 c20011| 2016-04-06T02:52:08.784-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|42, t: 1 } and is durable through: { ts: Timestamp 1459929128000|41, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.844-0500 c20011| 2016-04-06T02:52:08.784-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|42, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|41, t: 1 }, name-id: "121" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.846-0500 c20011| 2016-04-06T02:52:08.784-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 
1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.851-0500 c20011| 2016-04-06T02:52:08.784-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.856-0500 c20011| 2016-04-06T02:52:08.784-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.857-0500 c20011| 2016-04-06T02:52:08.784-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.862-0500 c20011| 2016-04-06T02:52:08.784-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.868-0500 c20011| 2016-04-06T02:52:08.784-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|42, t: 1 } and is durable through: { ts: Timestamp 1459929128000|42, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.868-0500 c20011| 2016-04-06T02:52:08.784-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|42, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.874-0500 c20011| 2016-04-06T02:52:08.784-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.878-0500 c20011| 2016-04-06T02:52:08.784-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.887-0500 c20011| 
2016-04-06T02:52:08.784-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: -93.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-94.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-93.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|14 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.890-0500 c20011| 2016-04-06T02:52:08.784-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.894-0500 c20011| 2016-04-06T02:52:08.784-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.897-0500 c20011| 2016-04-06T02:52:08.784-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.902-0500 c20011| 2016-04-06T02:52:08.784-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.784-0500-5704c02865c17830b843f18b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128784), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -94.0 }, max: { _id: MaxKey } }, left: { min: { _id: -94.0 }, max: { _id: -93.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -93.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.904-0500 c20011| 2016-04-06T02:52:08.785-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.906-0500 c20011| 
2016-04-06T02:52:08.787-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.909-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:43.913-0500 d20010| 2016-04-06T02:52:26.811-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -78.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -77.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|46, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:43.915-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:43.917-0500 c20013| 2016-04-06T02:52:08.843-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:43.919-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:43.920-0500 c20013| 2016-04-06T02:52:08.844-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:43.926-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:43.929-0500 c20011| 2016-04-06T02:52:08.788-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.930-0500 c20011| 2016-04-06T02:52:08.791-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:43.933-0500 c20011| 2016-04-06T02:52:08.793-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|43, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|42, t: 1 }, name-id: "122" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.938-0500 c20011| 2016-04-06T02:52:08.794-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.938-0500 c20011| 2016-04-06T02:52:08.794-0500 D COMMAND [conn12] command: replSetUpdatePosition 
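[annotation] The applyOps shown above is how this version commits a chunk split on the config server: both updated chunk documents are applied atomically, and the preCondition aborts the whole batch unless the collection's highest chunk version is still 1|14, guarding against a concurrent metadata change. A sketch of that request with the values from the log; chunk versions are Timestamp(major, minor), so the log's "Timestamp 1000|15" (seconds shown ×1000) is Timestamp(1, 15):

    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-94.0",
                   lastmod: Timestamp(1, 15), lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: -93.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-94.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-93.0",
                   lastmod: Timestamp(1, 16), lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-93.0" } }
        ],
        preCondition: [
            { ns: "config.chunks",
              q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
              res: { lastmod: Timestamp(1, 14) } }   // abort unless the top version is still 1|14
        ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
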
[js_test:multi_coll_drop] 2016-04-06T02:52:43.944-0500 c20011| 2016-04-06T02:52:08.794-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|42, t: 1 } and is durable through: { ts: Timestamp 1459929128000|42, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.947-0500 c20011| 2016-04-06T02:52:08.794-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|43, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|42, t: 1 }, name-id: "122" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.952-0500 c20011| 2016-04-06T02:52:08.794-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.959-0500 c20011| 2016-04-06T02:52:08.794-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:43.966-0500 c20011| 2016-04-06T02:52:08.794-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.967-0500 c20011| 2016-04-06T02:52:08.794-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.977-0500 c20011| 2016-04-06T02:52:08.794-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:43.978-0500 c20011| 2016-04-06T02:52:08.794-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:43.986-0500 c20011| 2016-04-06T02:52:08.794-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.991-0500 c20011| 2016-04-06T02:52:08.794-0500 D REPL [conn15] received notification that node with memberID 2 in 
config with version 1 has reached optime: { ts: Timestamp 1459929128000|43, t: 1 } and is durable through: { ts: Timestamp 1459929128000|42, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:43.993-0500 c20011| 2016-04-06T02:52:08.794-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|43, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|42, t: 1 }, name-id: "122" } [js_test:multi_coll_drop] 2016-04-06T02:52:43.999-0500 c20011| 2016-04-06T02:52:08.794-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.000-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.000-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.001-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.011-0500 d20010| 2016-04-06T02:52:26.819-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -78.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c03a65c17830b843f1ab [js_test:multi_coll_drop] 2016-04-06T02:52:44.025-0500 d20010| 2016-04-06T02:52:26.819-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|46||5704c02806c33406d4d9c0c0, current metadata version is 1|46||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.038-0500 d20010| 2016-04-06T02:52:26.821-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|46||5704c02806c33406d4d9c0c0, took 2ms) [js_test:multi_coll_drop] 2016-04-06T02:52:44.041-0500 d20010| 2016-04-06T02:52:26.821-0500 I SHARDING [conn5] splitChunk accepted at version 1|46||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.062-0500 d20010| 2016-04-06T02:52:26.828-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:26.828-0500-5704c03a65c17830b843f1ac", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146828), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -78.0 }, max: { _id: MaxKey } }, left: { min: { _id: -78.0 }, max: { _id: -77.0 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -77.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.065-0500 d20010| 2016-04-06T02:52:26.837-0500 I SHARDING [conn5] distributed lock with ts: 5704c03a65c17830b843f1ab' unlocked. 
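[annotation] The d20010 SHARDING lines above are the shard-side view of a later split (chunk [{ _id: -78 }, { _id: MaxKey }) at -77): acquire the distributed lock, refresh the collection metadata, accept the split at version 1|46, record the split in config.changelog, release the lock. From a client's perspective all of this is driven by one admin command; a sketch, assuming a shell connected to this cluster's mongos:

    // Ask the cluster to split multidrop.coll at _id -77; mongos forwards this
    // to the owning shard as the splitChunk request seen in the log.
    db.adminCommand({ split: "multidrop.coll", middle: { _id: -77.0 } });

    // Afterwards the split is visible in the config changelog:
    db.getSiblingDB("config").changelog.find(
        { what: "split", ns: "multidrop.coll" }).sort({ time: -1 }).limit(1).pretty();
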
[js_test:multi_coll_drop] 2016-04-06T02:52:44.065-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.067-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.067-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.073-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.074-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.082-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.082-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.083-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.084-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.088-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.095-0500 c20011| 2016-04-06T02:52:08.795-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|43, t: 1 } and is durable through: { ts: Timestamp 1459929128000|42, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.099-0500 c20011| 2016-04-06T02:52:08.795-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|43, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|42, t: 1 }, name-id: "122" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.103-0500 c20011| 2016-04-06T02:52:08.795-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.109-0500 c20011| 2016-04-06T02:52:08.795-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.120-0500 c20011| 2016-04-06T02:52:08.798-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.122-0500 c20011| 2016-04-06T02:52:08.798-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:44.127-0500 c20011| 2016-04-06T02:52:08.798-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.131-0500 c20011| 2016-04-06T02:52:08.798-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|43, t: 1 } and is durable through: { ts: Timestamp 1459929128000|43, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.133-0500 c20011| 2016-04-06T02:52:08.798-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|43, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.155-0500 c20011| 2016-04-06T02:52:08.798-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.161-0500 c20011| 2016-04-06T02:52:08.798-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:44.165-0500 c20011| 2016-04-06T02:52:08.798-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.180-0500 c20011| 2016-04-06T02:52:08.798-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.784-0500-5704c02865c17830b843f18b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128784), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -94.0 }, max: { _id: MaxKey } }, left: { min: { _id: -94.0 }, max: { _id: -93.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -93.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: 
"majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.182-0500 c20011| 2016-04-06T02:52:08.798-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|43, t: 1 } and is durable through: { ts: Timestamp 1459929128000|43, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.186-0500 c20011| 2016-04-06T02:52:08.798-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.195-0500 c20011| 2016-04-06T02:52:08.798-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.202-0500 c20011| 2016-04-06T02:52:08.798-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f18a') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.213-0500 c20011| 2016-04-06T02:52:08.799-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.224-0500 c20011| 2016-04-06T02:52:08.799-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f18a') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.231-0500 c20011| 2016-04-06T02:52:08.799-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.237-0500 c20011| 2016-04-06T02:52:08.800-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.240-0500 c20011| 2016-04-06T02:52:08.800-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.246-0500 c20011| 2016-04-06T02:52:08.800-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.250-0500 c20011| 2016-04-06T02:52:08.800-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.257-0500 c20011| 2016-04-06T02:52:08.800-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|44, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|43, t: 1 }, name-id: "123" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.263-0500 c20011| 2016-04-06T02:52:08.800-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.266-0500 c20011| 2016-04-06T02:52:08.803-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.267-0500 c20011| 2016-04-06T02:52:08.805-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.271-0500 c20011| 
2016-04-06T02:52:08.806-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.271-0500 c20011| 2016-04-06T02:52:08.806-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:44.273-0500 c20011| 2016-04-06T02:52:08.806-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.277-0500 c20011| 2016-04-06T02:52:08.806-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|44, t: 1 } and is durable through: { ts: Timestamp 1459929128000|43, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.278-0500 c20011| 2016-04-06T02:52:08.806-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|44, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|43, t: 1 }, name-id: "123" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.283-0500 c20011| 2016-04-06T02:52:08.806-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.289-0500 c20011| 2016-04-06T02:52:08.808-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.290-0500 c20011| 2016-04-06T02:52:08.808-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:44.295-0500 c20011| 2016-04-06T02:52:08.808-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|44, t: 1 } and is durable through: { ts: Timestamp 1459929128000|43, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.297-0500 c20011| 2016-04-06T02:52:08.808-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|44, t: 1 } is not yet part of the current 'committed' snapshot: 
{ optime: { ts: Timestamp 1459929128000|43, t: 1 }, name-id: "123" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.300-0500 c20011| 2016-04-06T02:52:08.808-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.310-0500 c20011| 2016-04-06T02:52:08.809-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.316-0500 c20011| 2016-04-06T02:52:08.824-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.319-0500 c20011| 2016-04-06T02:52:08.824-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:44.330-0500 c20011| 2016-04-06T02:52:08.824-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.331-0500 c20011| 2016-04-06T02:52:08.824-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:44.332-0500 c20011| 2016-04-06T02:52:08.824-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|44, t: 1 } and is durable through: { ts: Timestamp 1459929128000|44, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.337-0500 c20011| 2016-04-06T02:52:08.824-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|44, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.339-0500 c20011| 2016-04-06T02:52:08.824-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.349-0500 c20011| 2016-04-06T02:52:08.824-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.355-0500 c20011| 2016-04-06T02:52:08.824-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.359-0500 c20011| 2016-04-06T02:52:08.824-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|44, t: 1 } and is durable through: { ts: Timestamp 1459929128000|44, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.367-0500 c20011| 2016-04-06T02:52:08.824-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.370-0500 c20011| 2016-04-06T02:52:08.824-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 20ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.377-0500 c20011| 2016-04-06T02:52:08.824-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f18a') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 25ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.381-0500 c20011| 2016-04-06T02:52:08.824-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 18ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.382-0500 c20011| 2016-04-06T02:52:08.825-0500 D COMMAND 
[conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.384-0500 c20011| 2016-04-06T02:52:08.825-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.387-0500 c20011| 2016-04-06T02:52:08.826-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|14 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.392-0500 c20011| 2016-04-06T02:52:08.826-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.396-0500 c20011| 2016-04-06T02:52:08.826-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|14 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.400-0500 c20011| 2016-04-06T02:52:08.826-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:44.404-0500 c20011| 2016-04-06T02:52:08.826-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|14 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|44, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.408-0500 c20011| 2016-04-06T02:52:08.828-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f18c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128828), why: "splitting chunk [{ _id: -93.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.410-0500 c20011| 2016-04-06T02:52:08.828-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.419-0500 c20011| 2016-04-06T02:52:08.828-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.420-0500 c20011| 2016-04-06T02:52:08.828-0500 D QUERY [conn25] Only 
one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.427-0500 c20011| 2016-04-06T02:52:08.829-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.429-0500 c20011| 2016-04-06T02:52:08.830-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|45, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|44, t: 1 }, name-id: "124" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.432-0500 c20011| 2016-04-06T02:52:08.831-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.433-0500 c20011| 2016-04-06T02:52:08.831-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:44.436-0500 c20011| 2016-04-06T02:52:08.831-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.439-0500 c20011| 2016-04-06T02:52:08.831-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|45, t: 1 } and is durable through: { ts: Timestamp 1459929128000|44, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.439-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.439-0500 c20013| 2016-04-06T02:52:08.844-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.441-0500 c20013| 2016-04-06T02:52:08.845-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 652 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.845-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.442-0500 c20013| 2016-04-06T02:52:08.845-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 652 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.443-0500 c20013| 2016-04-06T02:52:08.845-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.445-0500 s20014| 2016-04-06T02:52:26.805-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 277 
finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.445-0500 s20015| 2016-04-06T02:52:27.336-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:44.447-0500 c20011| 2016-04-06T02:52:08.831-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|45, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|44, t: 1 }, name-id: "124" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.450-0500 s20014| 2016-04-06T02:52:26.805-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|44||5704c02806c33406d4d9c0c0 and 23 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:44.453-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.465-0500 d20010| 2016-04-06T02:52:26.841-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -77.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -76.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|48, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:44.467-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.487-0500 d20010| 2016-04-06T02:52:26.846-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -77.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c03a65c17830b843f1ad [js_test:multi_coll_drop] 2016-04-06T02:52:44.489-0500 c20011| 2016-04-06T02:52:08.831-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.492-0500 s20014| 2016-04-06T02:52:26.805-0500 D SHARDING [conn1] major version query from 1|44||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|44 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.493-0500 c20013| 2016-04-06T02:52:08.845-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.500-0500 d20010| 2016-04-06T02:52:26.846-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|48||5704c02806c33406d4d9c0c0, current metadata version is 1|48||5704c02806c33406d4d9c0c0 
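
Interleaved with the replication chatter, d20010 is working through the test's split loop: for each split it takes the 'multidrop.coll' distributed lock, refreshes the collection metadata from the config servers, performs the split, logs a changelog event, and unlocks. The lock itself is just a document in config.locks manipulated with findAndModify under w:"majority", as the conn25 commands above show. A rough shell sketch of that acquire/release pattern, assuming a connection to the config primary; the who/process/why strings are illustrative placeholders:

    var cfg = new Mongo("mongovm16:20011").getDB("config");
    var myTs = ObjectId();  // lock session id, matched again on release
    // Acquire: atomically flip the lock doc from state 0 (free) to state 2 (held).
    var lock = cfg.locks.findAndModify({
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: { ts: myTs, state: 2,
                          who: "example-host:20010:conn5",  // hypothetical holder
                          process: "example-host:20010",    // hypothetical process id
                          when: new Date(),
                          why: "splitting chunk" } },
        upsert: true, new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // Release: match on ts so a stale holder can never unlock a newer acquisition.
    cfg.locks.findAndModify({
        query: { ts: myTs },
        update: { $set: { state: 0 } },
        writeConcern: { w: "majority", wtimeout: 15000 }
    });

Matching on ts rather than _id on release is what keeps a delayed or retried unlock from clobbering a lock that has since been re-acquired under a new session id.
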
[js_test:multi_coll_drop] 2016-04-06T02:52:44.503-0500 s20014| 2016-04-06T02:52:26.805-0500 D ASIO [conn1] startCommand: RemoteCommand 279 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:56.805-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|44 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.505-0500 c20013| 2016-04-06T02:52:08.846-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:44.507-0500 d20010| 2016-04-06T02:52:26.852-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|48||5704c02806c33406d4d9c0c0, took 5ms) [js_test:multi_coll_drop] 2016-04-06T02:52:44.512-0500 c20011| 2016-04-06T02:52:08.831-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.512-0500 s20014| 2016-04-06T02:52:26.805-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 279 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.517-0500 c20011| 2016-04-06T02:52:08.832-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.518-0500 c20011| 2016-04-06T02:52:08.832-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:44.521-0500 c20011| 2016-04-06T02:52:08.832-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.523-0500 c20011| 2016-04-06T02:52:08.832-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|45, t: 1 } and is durable through: { ts: Timestamp 1459929128000|45, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.529-0500 c20013| 2016-04-06T02:52:08.846-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.535-0500 s20014| 2016-04-06T02:52:26.810-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] 
Request 279 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: -78.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.538-0500 s20014| 2016-04-06T02:52:26.810-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|46||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.550-0500 s20014| 2016-04-06T02:52:26.810-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 5ms sequenceNumber: 26 version: 1|46||5704c02806c33406d4d9c0c0 based on: 1|44||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.551-0500 c20011| 2016-04-06T02:52:08.832-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|45, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.557-0500 c20011| 2016-04-06T02:52:08.832-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.561-0500 c20011| 2016-04-06T02:52:08.832-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.566-0500 c20013| 2016-04-06T02:52:08.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 653 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.570-0500 c20013| 2016-04-06T02:52:08.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 653 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.573-0500 c20013| 2016-04-06T02:52:08.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 653 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.580-0500 c20013| 2016-04-06T02:52:08.848-0500 D REPL 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.581-0500 d20010| 2016-04-06T02:52:26.852-0500 I SHARDING [conn5] splitChunk accepted at version 1|48||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.587-0500 d20010| 2016-04-06T02:52:26.862-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:26.862-0500-5704c03a65c17830b843f1ae", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146862), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -77.0 }, max: { _id: MaxKey } }, left: { min: { _id: -77.0 }, max: { _id: -76.0 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -76.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.590-0500 d20010| 2016-04-06T02:52:26.881-0500 I SHARDING [conn5] distributed lock with ts: 5704c03a65c17830b843f1ad' unlocked. [js_test:multi_coll_drop] 2016-04-06T02:52:44.592-0500 d20010| 2016-04-06T02:52:26.883-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -76.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -75.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|50, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:52:44.594-0500 d20010| 2016-04-06T02:52:26.891-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -76.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c03a65c17830b843f1af [js_test:multi_coll_drop] 2016-04-06T02:52:44.595-0500 d20010| 2016-04-06T02:52:26.891-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|50||5704c02806c33406d4d9c0c0, current metadata version is 1|50||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.597-0500 d20010| 2016-04-06T02:52:26.893-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|50||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:52:44.598-0500 d20010| 2016-04-06T02:52:26.893-0500 I SHARDING [conn5] splitChunk accepted at version 1|50||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.603-0500 s20014| 2016-04-06T02:52:26.810-0500 D ASIO [conn1] startCommand: RemoteCommand 281 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:52:56.810-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.607-0500 s20014| 
2016-04-06T02:52:26.810-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 281 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.612-0500 s20014| 2016-04-06T02:52:26.811-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 281 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.613-0500 s20014| 2016-04-06T02:52:26.811-0500 I COMMAND [conn1] splitting chunk [{ _id: -78.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:44.630-0500 s20014| 2016-04-06T02:52:26.837-0500 D ASIO [conn1] startCommand: RemoteCommand 283 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:56.837-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.632-0500 s20014| 2016-04-06T02:52:26.838-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 283 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:44.646-0500 s20014| 2016-04-06T02:52:26.838-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 283 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.652-0500 s20014| 2016-04-06T02:52:26.838-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|46||5704c02806c33406d4d9c0c0 and 24 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:44.656-0500 s20014| 2016-04-06T02:52:26.838-0500 D SHARDING [conn1] major version query from 1|46||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|46 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.662-0500 s20014| 2016-04-06T02:52:26.838-0500 D ASIO [conn1] startCommand: RemoteCommand 285 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:56.838-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|46 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.663-0500 s20014| 2016-04-06T02:52:26.839-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 285 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:44.684-0500 s20014| 2016-04-06T02:52:26.840-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 285 finished with response: { waitedMS: 1, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: -77.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: 
"multidrop.coll", min: { _id: -77.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.686-0500 s20014| 2016-04-06T02:52:26.840-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|48||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.687-0500 s20014| 2016-04-06T02:52:26.840-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 1ms sequenceNumber: 27 version: 1|48||5704c02806c33406d4d9c0c0 based on: 1|46||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.691-0500 s20014| 2016-04-06T02:52:26.840-0500 D ASIO [conn1] startCommand: RemoteCommand 287 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:56.840-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.693-0500 s20014| 2016-04-06T02:52:26.841-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 287 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:44.698-0500 s20014| 2016-04-06T02:52:26.841-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 287 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.700-0500 s20014| 2016-04-06T02:52:26.841-0500 I COMMAND [conn1] splitting chunk [{ _id: -77.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:44.703-0500 s20014| 2016-04-06T02:52:26.881-0500 D ASIO [conn1] startCommand: RemoteCommand 289 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:52:56.881-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.703-0500 s20014| 2016-04-06T02:52:26.881-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 289 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:44.708-0500 s20014| 2016-04-06T02:52:26.882-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 289 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.720-0500 s20014| 2016-04-06T02:52:26.882-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|48||5704c02806c33406d4d9c0c0 and 25 chunks [js_test:multi_coll_drop] 2016-04-06T02:52:44.725-0500 s20014| 2016-04-06T02:52:26.882-0500 D SHARDING [conn1] major version query from 1|48||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|48 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.727-0500 s20014| 2016-04-06T02:52:26.882-0500 
D ASIO [conn1] startCommand: RemoteCommand 291 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:56.882-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|48 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.730-0500 s20014| 2016-04-06T02:52:26.882-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 291 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:44.735-0500 s20014| 2016-04-06T02:52:26.882-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 291 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: -76.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.738-0500 s20014| 2016-04-06T02:52:26.882-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|50||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.740-0500 s20014| 2016-04-06T02:52:26.882-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 28 version: 1|50||5704c02806c33406d4d9c0c0 based on: 1|48||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:52:44.742-0500 s20014| 2016-04-06T02:52:26.883-0500 D ASIO [conn1] startCommand: RemoteCommand 293 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:52:56.883-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.747-0500 s20014| 2016-04-06T02:52:26.883-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 293 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:44.753-0500 s20014| 2016-04-06T02:52:26.883-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 293 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.756-0500 s20014| 2016-04-06T02:52:26.883-0500 I COMMAND [conn1] splitting chunk [{ _id: -76.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:52:44.757-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.764-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.766-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.767-0500 c20012| 
2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.769-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.771-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.772-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.775-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.778-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.780-0500 c20012| 2016-04-06T02:52:08.516-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.783-0500 c20012| 2016-04-06T02:52:08.516-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:44.788-0500 c20012| 2016-04-06T02:52:08.516-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.792-0500 c20012| 2016-04-06T02:52:08.517-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 388 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.794-0500 c20012| 2016-04-06T02:52:08.517-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 389 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.517-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.797-0500 c20012| 2016-04-06T02:52:08.517-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 388 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.797-0500 c20012| 2016-04-06T02:52:08.517-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 388 finished with response: { ok: 
1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.798-0500 c20012| 2016-04-06T02:52:08.517-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 389 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.805-0500 c20012| 2016-04-06T02:52:08.519-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.807-0500 c20012| 2016-04-06T02:52:08.519-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 391 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.808-0500 c20012| 2016-04-06T02:52:08.519-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 391 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.811-0500 c20013| 2016-04-06T02:52:08.848-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 655 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.812-0500 c20013| 2016-04-06T02:52:08.848-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 655 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.813-0500 c20013| 2016-04-06T02:52:08.848-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 655 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.827-0500 c20013| 2016-04-06T02:52:08.849-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
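The replSetUpdatePosition round trips above are how each config-server secondary reports its applied and durable optimes up to its sync source (mongovm16:20011); the 'committed' snapshot only advances once a majority of members is durable at an optime. The same progress is visible from a shell, as in this minimal sketch (assumes a connection to a member of multidrop-configRS; the optimeDurable field name is an assumption that may differ on this development build):

    // Sketch only: summarize per-member replication progress, the same
    // durable-vs-applied split carried in the replSetUpdatePosition payloads.
    var status = db.adminCommand({replSetGetStatus: 1});
    status.members.forEach(function(m) {
        // optimeDurable is assumed present on this 3.3.x build.
        print(m.name + " applied=" + tojson(m.optime) +
              " durable=" + tojson(m.optimeDurable));
    });
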
2016-04-06T02:52:44.830-0500 c20013| 2016-04-06T02:52:08.849-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 657 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.830-0500 c20013| 2016-04-06T02:52:08.849-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 657 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.831-0500 c20013| 2016-04-06T02:52:08.850-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 657 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.834-0500 c20013| 2016-04-06T02:52:08.850-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 652 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.836-0500 c20013| 2016-04-06T02:52:08.850-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|47, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.838-0500 c20013| 2016-04-06T02:52:08.850-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:44.844-0500 c20013| 2016-04-06T02:52:08.851-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 660 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.851-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.844-0500 c20013| 2016-04-06T02:52:08.851-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 660 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.848-0500 c20013| 2016-04-06T02:52:08.851-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 660 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|48, t: 1, h: -6375076965146338454, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.849-0500 c20013| 2016-04-06T02:52:08.851-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|48 and ending at ts: Timestamp 1459929128000|48 [js_test:multi_coll_drop] 2016-04-06T02:52:44.850-0500 c20013| 2016-04-06T02:52:08.851-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:44.851-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.853-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.853-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.854-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.856-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.857-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.858-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.858-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.859-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.860-0500 c20013| 2016-04-06T02:52:08.852-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:44.862-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.863-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.864-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.865-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.866-0500 c20013| 2016-04-06T02:52:08.852-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:44.868-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.868-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.869-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.870-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.871-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
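The one-document batch being applied here is a distributed-lock transition on config.locks for _id "multidrop.coll": { $set: { state: 0 } } releases the lock (state 0 = free, state 2 = held, as the "splitting chunk ..." acquisition entries elsewhere in this log show). A hedged shell sketch for inspecting that lock document (the connection target, a config-server member, is an assumption):

    // Sketch only: read the lock document the oplog entry above just updated.
    // state: 0 means unlocked; state: 2 means held, with who/process/why/when
    // recording the owner, as seen in the findAndModify entries in this log.
    var lock = db.getSiblingDB("config").locks.findOne({_id: "multidrop.coll"});
    printjson(lock);
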
2016-04-06T02:52:44.873-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.873-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.875-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.877-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.879-0500 c20013| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.881-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.882-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.883-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.884-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.885-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.886-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.887-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.891-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.894-0500 c20013| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.896-0500 c20013| 2016-04-06T02:52:08.853-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 662 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.853-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.897-0500 c20013| 2016-04-06T02:52:08.853-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 662 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.899-0500 c20013| 2016-04-06T02:52:08.853-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:44.904-0500 c20013| 2016-04-06T02:52:08.853-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.909-0500 c20013| 2016-04-06T02:52:08.853-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 663 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.911-0500 c20013| 2016-04-06T02:52:08.853-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 663 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.915-0500 c20013| 2016-04-06T02:52:08.854-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 663 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.920-0500 c20013| 2016-04-06T02:52:08.856-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.926-0500 c20013| 2016-04-06T02:52:08.856-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 665 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:44.929-0500 c20013| 2016-04-06T02:52:08.856-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 665 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.930-0500 c20013| 2016-04-06T02:52:08.856-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 665 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.935-0500 c20013| 2016-04-06T02:52:08.856-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 662 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.939-0500 c20013| 2016-04-06T02:52:08.856-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|48, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.940-0500 c20013| 2016-04-06T02:52:08.856-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:44.943-0500 c20013| 2016-04-06T02:52:08.856-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 668 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.856-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.954-0500 c20013| 2016-04-06T02:52:08.856-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 668 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:44.958-0500 c20013| 2016-04-06T02:52:08.857-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.963-0500 c20013| 2016-04-06T02:52:08.857-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:44.971-0500 c20013| 2016-04-06T02:52:08.857-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.973-0500 c20013| 2016-04-06T02:52:08.857-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:44.976-0500 c20013| 2016-04-06T02:52:08.857-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:44.981-0500 c20013| 2016-04-06T02:52:08.860-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 668 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|49, t: 1, h: 8965959093496929051, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f18e'), state: 2, when: new Date(1459929128859), why: "splitting chunk [{ _id: -92.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:44.986-0500 c20013| 2016-04-06T02:52:08.860-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|49 and ending at ts: Timestamp 1459929128000|49 [js_test:multi_coll_drop] 2016-04-06T02:52:44.988-0500 c20013| 2016-04-06T02:52:08.860-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:44.989-0500 c20013| 2016-04-06T02:52:08.860-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.991-0500 c20013| 2016-04-06T02:52:08.860-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.993-0500 c20013| 2016-04-06T02:52:08.860-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.994-0500 c20013| 2016-04-06T02:52:08.860-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.994-0500 c20013| 2016-04-06T02:52:08.860-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.998-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:44.998-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.016-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.016-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.021-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.021-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.022-0500 c20013| 2016-04-06T02:52:08.861-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:45.025-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.028-0500 c20013| 2016-04-06T02:52:08.861-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.030-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.032-0500 c20013| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.037-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.039-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.041-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.058-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
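conn10's find above first waits for a 'committed' snapshot at or beyond the requested afterOpTime and then runs under it: readConcern majority plus afterOpTime is how the sharding code in this era avoids reading config metadata older than its own last write. A hedged shell rendering of that same command (values copied from the log, which prints BSON timestamps as seconds*1000|increment; afterOpTime is an internal parameter here, not a documented client option):

    // Sketch only: the majority read issued on conn10, written out explicitly.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: {ns: "multidrop.coll"},
        sort: {lastmod: -1},
        limit: 1,
        readConcern: {level: "majority",
                      afterOpTime: {ts: Timestamp(1459929128, 48), t: NumberLong(1)}},
        maxTimeMS: 30000
    });
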
2016-04-06T02:52:45.061-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.063-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.063-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.064-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.067-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.070-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.072-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.073-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.076-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.077-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.078-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.079-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.081-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.082-0500 c20013| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.084-0500 c20013| 2016-04-06T02:52:08.862-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 670 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.862-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.085-0500 c20013| 2016-04-06T02:52:08.862-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 670 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.088-0500 c20013| 2016-04-06T02:52:08.863-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:45.094-0500 c20013| 2016-04-06T02:52:08.864-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.106-0500 c20013| 2016-04-06T02:52:08.864-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 671 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.106-0500 c20013| 2016-04-06T02:52:08.864-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 671 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.109-0500 c20013| 2016-04-06T02:52:08.864-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 671 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.115-0500 c20013| 2016-04-06T02:52:08.866-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.124-0500 c20013| 2016-04-06T02:52:08.866-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 673 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.129-0500 c20013| 2016-04-06T02:52:08.866-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 673 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.131-0500 c20013| 2016-04-06T02:52:08.867-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 673 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.134-0500 c20013| 2016-04-06T02:52:08.867-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 670 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.136-0500 c20013| 2016-04-06T02:52:08.867-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|49, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.141-0500 c20013| 2016-04-06T02:52:08.867-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:45.144-0500 c20013| 2016-04-06T02:52:08.867-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 676 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.867-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.145-0500 c20013| 2016-04-06T02:52:08.867-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 676 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.166-0500 c20013| 2016-04-06T02:52:08.869-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 676 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|50, t: 1, h: 2946125543669679599, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: -91.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-92.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-91.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.170-0500 c20013| 2016-04-06T02:52:08.869-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|50 and ending at ts: Timestamp 1459929128000|50 [js_test:multi_coll_drop] 2016-04-06T02:52:45.184-0500 c20013| 2016-04-06T02:52:08.869-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:45.187-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.191-0500 c20012| 2016-04-06T02:52:08.519-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 391 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.198-0500 c20012| 2016-04-06T02:52:08.520-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 389 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.201-0500 c20012| 2016-04-06T02:52:08.520-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|13, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.201-0500 c20012| 2016-04-06T02:52:08.520-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:45.219-0500 c20012| 2016-04-06T02:52:08.520-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 394 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.520-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.223-0500 c20012| 2016-04-06T02:52:08.520-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 394 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.226-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.227-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.229-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.230-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.231-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.232-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.239-0500 c20012| 2016-04-06T02:52:08.520-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|13, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.247-0500 c20011| 2016-04-06T02:52:08.832-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f18c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128828), why: "splitting chunk [{ _id: -93.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { 
$set: { ts: ObjectId('5704c02865c17830b843f18c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128828), why: "splitting chunk [{ _id: -93.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.250-0500 c20011| 2016-04-06T02:52:08.832-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.253-0500 c20011| 2016-04-06T02:52:08.832-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|45, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.255-0500 c20011| 2016-04-06T02:52:08.832-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.263-0500 c20011| 2016-04-06T02:52:08.832-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|45, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.275-0500 c20011| 2016-04-06T02:52:08.832-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:45.285-0500 c20011| 2016-04-06T02:52:08.833-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|45, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.291-0500 c20011| 2016-04-06T02:52:08.835-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|16 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|45, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.294-0500 c20011| 2016-04-06T02:52:08.835-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.298-0500 c20011| 2016-04-06T02:52:08.835-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|16 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|45, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.302-0500 c20011| 2016-04-06T02:52:08.835-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:45.307-0500 c20011| 2016-04-06T02:52:08.835-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|16 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|45, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.311-0500 c20011| 2016-04-06T02:52:08.835-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.315-0500 c20011| 2016-04-06T02:52:08.835-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: -92.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-93.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-92.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|16 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.317-0500 c20011| 2016-04-06T02:52:08.835-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:45.318-0500 c20011| 2016-04-06T02:52:08.835-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:45.320-0500 c20011| 2016-04-06T02:52:08.835-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 
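The applyOps issued on conn25 above is the split's commit: two upserts into config.chunks carrying the bumped chunk versions 1|17 and 1|18, guarded by a preCondition that the collection's highest version is still 1|16, so the whole batch aborts if another router committed a metadata change first. A hedged shell sketch of that shape (field values copied from the log, which prints the version Timestamps as 1000|17 etc., i.e. major*1000|minor):

    // Sketch only: all-or-nothing chunk-split commit with an optimistic
    // preCondition on the current top chunk version.
    db.getSiblingDB("config").runCommand({
        applyOps: [
            {op: "u", b: true, ns: "config.chunks",
             o: {_id: "multidrop.coll-_id_-93.0", lastmod: Timestamp(1, 17),
                 lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                 ns: "multidrop.coll", min: {_id: -93.0}, max: {_id: -92.0},
                 shard: "shard0000"},
             o2: {_id: "multidrop.coll-_id_-93.0"}},
            {op: "u", b: true, ns: "config.chunks",
             o: {_id: "multidrop.coll-_id_-92.0", lastmod: Timestamp(1, 18),
                 lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                 ns: "multidrop.coll", min: {_id: -92.0}, max: {_id: MaxKey},
                 shard: "shard0000"},
             o2: {_id: "multidrop.coll-_id_-92.0"}}
        ],
        preCondition: [{ns: "config.chunks",
                        q: {query: {ns: "multidrop.coll"}, orderby: {lastmod: -1}},
                        res: {lastmod: Timestamp(1, 16)}}],
        writeConcern: {w: "majority", wtimeout: 15000}
    });
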
2016-04-06T02:52:45.321-0500 c20011| 2016-04-06T02:52:08.835-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-93.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.323-0500 c20011| 2016-04-06T02:52:08.835-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-92.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.326-0500 c20011| 2016-04-06T02:52:08.837-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.332-0500 c20011| 2016-04-06T02:52:08.837-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.332-0500 c20011| 2016-04-06T02:52:08.837-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.339-0500 c20011| 2016-04-06T02:52:08.837-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|45, t: 1 } and is durable through: { ts: Timestamp 1459929128000|44, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.343-0500 c20011| 2016-04-06T02:52:08.837-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.350-0500 c20011| 2016-04-06T02:52:08.837-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.355-0500 c20011| 2016-04-06T02:52:08.838-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|46, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|45, t: 1 }, name-id: "125" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.356-0500 c20011| 2016-04-06T02:52:08.839-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.362-0500 c20011| 2016-04-06T02:52:08.839-0500 D COMMAND [conn12] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.362-0500 c20011| 2016-04-06T02:52:08.839-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.365-0500 c20011| 2016-04-06T02:52:08.839-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|45, t: 1 } and is durable through: { ts: Timestamp 1459929128000|45, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.369-0500 c20011| 2016-04-06T02:52:08.839-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|46, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|45, t: 1 }, name-id: "125" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.374-0500 c20011| 2016-04-06T02:52:08.839-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.377-0500 c20011| 2016-04-06T02:52:08.839-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.383-0500 c20011| 2016-04-06T02:52:08.839-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.384-0500 c20011| 2016-04-06T02:52:08.840-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.390-0500 c20011| 2016-04-06T02:52:08.841-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, 
memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.390-0500 c20011| 2016-04-06T02:52:08.841-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.392-0500 c20011| 2016-04-06T02:52:08.841-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.398-0500 c20011| 2016-04-06T02:52:08.841-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|46, t: 1 } and is durable through: { ts: Timestamp 1459929128000|45, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.403-0500 c20011| 2016-04-06T02:52:08.841-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|46, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|45, t: 1 }, name-id: "125" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.405-0500 c20011| 2016-04-06T02:52:08.841-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.411-0500 c20011| 2016-04-06T02:52:08.841-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.411-0500 c20011| 2016-04-06T02:52:08.841-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.415-0500 c20011| 2016-04-06T02:52:08.841-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|46, t: 1 } and is durable through: { ts: Timestamp 1459929128000|45, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.417-0500 c20011| 2016-04-06T02:52:08.841-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|46, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|45, t: 1 }, name-id: "125" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.422-0500 c20011| 2016-04-06T02:52:08.841-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.427-0500 c20011| 2016-04-06T02:52:08.841-0500 I COMMAND [conn12] command admin.$cmd 
command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.428-0500 c20012| 2016-04-06T02:52:08.520-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.430-0500 c20012| 2016-04-06T02:52:08.520-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|13, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.431-0500 c20012| 2016-04-06T02:52:08.520-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:45.458-0500 c20012| 2016-04-06T02:52:08.521-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|13, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.463-0500 c20012| 2016-04-06T02:52:08.524-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 394 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|14, t: 1, h: -6429269363497138108, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: -100.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_MinKey" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-100.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.464-0500 c20012| 2016-04-06T02:52:08.524-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|14 and ending at ts: Timestamp 1459929128000|14 [js_test:multi_coll_drop] 2016-04-06T02:52:45.467-0500 c20012| 2016-04-06T02:52:08.524-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:45.467-0500 c20012| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.469-0500 c20012| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.469-0500 c20012| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.471-0500 c20012| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.474-0500 c20012| 2016-04-06T02:52:08.524-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.486-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.486-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.486-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.487-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.487-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.488-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.504-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.505-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.508-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.508-0500 c20012| 2016-04-06T02:52:08.525-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:45.510-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.516-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.519-0500 c20012| 2016-04-06T02:52:08.525-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_MinKey" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.522-0500 c20012| 2016-04-06T02:52:08.525-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-100.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.523-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
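
The exchange above is the replica-set commit-point machinery at work: each member reports a { durableOpTime, appliedOpTime } pair per node through replSetUpdatePosition, the primary advances _lastCommittedOpTime once a majority is durable through an optime, and a read with readConcern level "majority" plus afterOpTime blocks ("Waiting for 'committed' snapshot to be available for reading") until the committed snapshot catches up. A minimal shell sketch of the same majority read of config.collections, assuming a connection to one of the config members (host/port below are taken from this log and will differ elsewhere; the shell Timestamp takes seconds and increment):

    // Sketch only: replays the majority read logged by conn11 on c20012.
    var conn = new Mongo("mongovm16:20012");
    var res = conn.getDB("config").runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        limit: 1,
        maxTimeMS: 30000,
        readConcern: {
            level: "majority",
            // The server holds this command until the 'committed' snapshot
            // includes the given optime, exactly as traced above.
            afterOpTime: { ts: Timestamp(1459929128, 13), t: NumberLong(1) }
        }
    });
    printjson(res);
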
2016-04-06T02:52:45.530-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.534-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.534-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.535-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.539-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.539-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.540-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.540-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.541-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.543-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.545-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.548-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.549-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.551-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.553-0500 c20012| 2016-04-06T02:52:08.525-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.554-0500 c20012| 2016-04-06T02:52:08.526-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:45.558-0500 c20012| 2016-04-06T02:52:08.526-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.581-0500 c20012| 2016-04-06T02:52:08.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 396 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|13, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.584-0500 c20012| 2016-04-06T02:52:08.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 396 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.587-0500 c20012| 2016-04-06T02:52:08.526-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 396 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.597-0500 c20012| 2016-04-06T02:52:08.526-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 398 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.526-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|13, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.602-0500 c20012| 2016-04-06T02:52:08.526-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 398 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.613-0500 c20012| 2016-04-06T02:52:08.527-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.625-0500 c20012| 2016-04-06T02:52:08.527-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 399 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|14, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.628-0500 c20012| 2016-04-06T02:52:08.527-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 399 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.630-0500 c20012| 2016-04-06T02:52:08.527-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 399 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.631-0500 c20012| 2016-04-06T02:52:08.528-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 398 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.633-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.633-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.635-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.637-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.639-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.640-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.641-0500 c20013| 2016-04-06T02:52:08.869-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:45.648-0500 c20011| 2016-04-06T02:52:08.842-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.650-0500 c20011| 2016-04-06T02:52:08.842-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.664-0500 c20011| 2016-04-06T02:52:08.842-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|46, t: 1 } and is durable through: { ts: Timestamp 1459929128000|46, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.664-0500 c20011| 2016-04-06T02:52:08.842-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|46, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.668-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.672-0500 c20013| 2016-04-06T02:52:08.869-0500 D QUERY [repl writer worker 14] Using 
idhack: { _id: "multidrop.coll-_id_-92.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.673-0500 c20012| 2016-04-06T02:52:08.528-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|14, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.677-0500 c20012| 2016-04-06T02:52:08.528-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:45.681-0500 c20012| 2016-04-06T02:52:08.528-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 402 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.528-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.681-0500 c20012| 2016-04-06T02:52:08.528-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 402 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:45.689-0500 c20012| 2016-04-06T02:52:08.529-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 402 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|15, t: 1, h: 7753166607224067281, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.528-0500-5704c02865c17830b843f17d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128528), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey } }, left: { min: { _id: MinKey }, max: { _id: -100.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.691-0500 c20012| 2016-04-06T02:52:08.529-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|15 and ending at ts: Timestamp 1459929128000|15 [js_test:multi_coll_drop] 2016-04-06T02:52:45.693-0500 c20012| 2016-04-06T02:52:08.529-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:52:45.697-0500 c20012| 2016-04-06T02:52:08.529-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:45.698-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.699-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.699-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.700-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.700-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.702-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.705-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.707-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.708-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.709-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.713-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.716-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.717-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.718-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.721-0500 c20012| 2016-04-06T02:52:08.529-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:45.722-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.724-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.725-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.727-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.729-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
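
Here c20012's background sync is tailing the primary's oplog: each getMore on local.oplog.rs carries the follower's term and lastKnownCommittedOpTime (the server-side await timeout of 2500 ms is visible in those getMore commands), the fetcher reports how many operations each batch contained, and rsSync hands every batch (size 1 throughout this stretch) to a transient pool of "repl writer worker" threads before the reporter pushes the new progress upstream. The same tail can be approximated from the shell with a tailable, awaitData cursor; an illustrative sketch, with the sync-source address taken from this log:

    // Sketch only: follow the oplog the way the fetcher above does.
    var conn = new Mongo("mongovm16:20011");
    var oplog = conn.getDB("local").getCollection("oplog.rs");
    var cur = oplog.find({ ts: { $gte: Timestamp(1459929128, 14) } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) {
        // Each entry has the shape seen in the nextBatch arrays above:
        // { ts, t, h, v, op, ns, o, ... }
        printjsononeline(cur.next());
    }
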
2016-04-06T02:52:45.732-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.732-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.735-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.741-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.744-0500 c20012| 2016-04-06T02:52:08.529-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.746-0500 c20012| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.747-0500 c20012| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.750-0500 c20012| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.753-0500 c20012| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.753-0500 c20012| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.755-0500 c20012| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.756-0500 c20012| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.758-0500 c20012| 2016-04-06T02:52:08.530-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:45.759-0500 c20012| 2016-04-06T02:52:08.530-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:45.770-0500 c20012| 2016-04-06T02:52:08.530-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.793-0500 c20012| 2016-04-06T02:52:08.530-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 404 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|14, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.795-0500 c20011| 2016-04-06T02:52:08.842-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.800-0500 c20011| 2016-04-06T02:52:08.842-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.808-0500 c20011| 2016-04-06T02:52:08.842-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: -92.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-93.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-92.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|16 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { 
acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.815-0500 c20011| 2016-04-06T02:52:08.842-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.816-0500 c20011| 2016-04-06T02:52:08.842-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.820-0500 c20011| 2016-04-06T02:52:08.842-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.842-0500-5704c02865c17830b843f18d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128842), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -93.0 }, max: { _id: MaxKey } }, left: { min: { _id: -93.0 }, max: { _id: -92.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -92.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.824-0500 c20011| 2016-04-06T02:52:08.842-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.829-0500 c20011| 2016-04-06T02:52:08.842-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.830-0500 c20011| 2016-04-06T02:52:08.845-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.833-0500 c20011| 2016-04-06T02:52:08.845-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.834-0500 c20011| 2016-04-06T02:52:08.845-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.838-0500 c20011| 2016-04-06T02:52:08.846-0500 D COMMAND 
[conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.838-0500 c20011| 2016-04-06T02:52:08.846-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.841-0500 c20011| 2016-04-06T02:52:08.846-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.845-0500 c20011| 2016-04-06T02:52:08.846-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|47, t: 1 } and is durable through: { ts: Timestamp 1459929128000|45, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.847-0500 c20011| 2016-04-06T02:52:08.846-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.855-0500 c20011| 2016-04-06T02:52:08.848-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.864-0500 c20011| 2016-04-06T02:52:08.848-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|47, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|46, t: 1 }, name-id: "126" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.869-0500 c20011| 2016-04-06T02:52:08.848-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.871-0500 c20011| 2016-04-06T02:52:08.848-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.875-0500 c20011| 
2016-04-06T02:52:08.848-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.877-0500 c20011| 2016-04-06T02:52:08.848-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|47, t: 1 } and is durable through: { ts: Timestamp 1459929128000|46, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.882-0500 c20011| 2016-04-06T02:52:08.848-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|47, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|46, t: 1 }, name-id: "126" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.892-0500 c20011| 2016-04-06T02:52:08.848-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.898-0500 c20011| 2016-04-06T02:52:08.849-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.900-0500 c20011| 2016-04-06T02:52:08.849-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.904-0500 c20011| 2016-04-06T02:52:08.850-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.908-0500 c20011| 2016-04-06T02:52:08.850-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|47, t: 1 } and is durable through: { ts: Timestamp 1459929128000|47, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.910-0500 c20011| 2016-04-06T02:52:08.850-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|47, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.939-0500 c20011| 2016-04-06T02:52:08.850-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 
}, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.953-0500 c20011| 2016-04-06T02:52:08.850-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.842-0500-5704c02865c17830b843f18d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128842), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -93.0 }, max: { _id: MaxKey } }, left: { min: { _id: -93.0 }, max: { _id: -92.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -92.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.958-0500 c20011| 2016-04-06T02:52:08.850-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f18c') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.960-0500 c20011| 2016-04-06T02:52:08.850-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:45.962-0500 c20011| 2016-04-06T02:52:08.850-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f18c') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.970-0500 c20011| 2016-04-06T02:52:08.850-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.972-0500 c20011| 2016-04-06T02:52:08.850-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.976-0500 c20011| 2016-04-06T02:52:08.850-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:45.976-0500 c20011| 2016-04-06T02:52:08.850-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:45.979-0500 c20011| 2016-04-06T02:52:08.851-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|47, t: 1 } and is durable through: { ts: Timestamp 1459929128000|46, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.981-0500 c20011| 2016-04-06T02:52:08.851-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:45.985-0500 c20011| 2016-04-06T02:52:08.851-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:45.993-0500 c20011| 2016-04-06T02:52:08.851-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:45.995-0500 c20011| 2016-04-06T02:52:08.851-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ 
Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.002-0500 c20011| 2016-04-06T02:52:08.851-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.008-0500 c20011| 2016-04-06T02:52:08.851-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|48, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|47, t: 1 }, name-id: "127" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.020-0500 c20011| 2016-04-06T02:52:08.852-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.020-0500 c20011| 2016-04-06T02:52:08.852-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.025-0500 c20011| 2016-04-06T02:52:08.852-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|47, t: 1 } and is durable through: { ts: Timestamp 1459929128000|47, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.027-0500 c20011| 2016-04-06T02:52:08.852-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|48, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|47, t: 1 }, name-id: "127" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.031-0500 c20011| 2016-04-06T02:52:08.852-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.035-0500 c20011| 2016-04-06T02:52:08.852-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.036-0500 c20011| 2016-04-06T02:52:08.853-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } 
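
This stretch closes out one full split cycle against the config servers: the two replacement chunk documents were committed atomically with an applyOps guarded by a preCondition on the collection's highest lastmod (Timestamp 1000|16), a "split" entry was inserted into config.changelog, and the distributed lock in config.locks was released by flipping its state back to 0; every step carries writeConcern { w: "majority", wtimeout: 15000 }, which is why each write above is followed by another round of the commit-point traffic. The acquisition for the next chunk's split follows just below. An illustrative sketch of the lock handshake, with identifiers copied from this log and the connection assumed:

    // Sketch only: the config.locks take/release pattern bracketing a split.
    var conn = new Mongo("mongovm16:20011");
    var locks = conn.getDB("config").getCollection("locks");
    // Take the lock: succeeds only when no holder exists (state: 0).
    var lock = locks.findAndModify({
        query:  { _id: "multidrop.coll", state: 0 },
        update: { $set: {
            ts: ObjectId(),                 // fresh handle; released by this id
            state: 2,
            who: "mongovm16:20010:1459929128:185613966:conn5",
            process: "mongovm16:20010:1459929128:185613966",
            when: new Date(),
            why: "splitting chunk [{ _id: -92.0 }, { _id: MaxKey }) in multidrop.coll"
        } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // ... the applyOps with its preCondition and the changelog insert run
    // here, while the lock is held ...
    // Release: state back to 0, addressed by the ts handle as in the log.
    locks.findAndModify({
        query:  { ts: lock.ts },
        update: { $set: { state: 0 } },
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
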
[js_test:multi_coll_drop] 2016-04-06T02:52:46.041-0500 c20011| 2016-04-06T02:52:08.853-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.044-0500 c20011| 2016-04-06T02:52:08.853-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.050-0500 c20011| 2016-04-06T02:52:08.853-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|48, t: 1 } and is durable through: { ts: Timestamp 1459929128000|47, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.053-0500 c20011| 2016-04-06T02:52:08.853-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|48, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|47, t: 1 }, name-id: "127" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.059-0500 c20011| 2016-04-06T02:52:08.853-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.068-0500 c20011| 2016-04-06T02:52:08.853-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.070-0500 c20011| 2016-04-06T02:52:08.854-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.081-0500 c20011| 2016-04-06T02:52:08.854-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.081-0500 c20011| 2016-04-06T02:52:08.854-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.085-0500 c20011| 2016-04-06T02:52:08.854-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached 
optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.090-0500 c20011| 2016-04-06T02:52:08.854-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|48, t: 1 } and is durable through: { ts: Timestamp 1459929128000|47, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.093-0500 c20011| 2016-04-06T02:52:08.854-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|48, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|47, t: 1 }, name-id: "127" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.101-0500 c20011| 2016-04-06T02:52:08.854-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.106-0500 c20011| 2016-04-06T02:52:08.856-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.109-0500 c20011| 2016-04-06T02:52:08.856-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.111-0500 c20011| 2016-04-06T02:52:08.856-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.114-0500 c20011| 2016-04-06T02:52:08.856-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|48, t: 1 } and is durable through: { ts: Timestamp 1459929128000|48, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.115-0500 c20011| 2016-04-06T02:52:08.856-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|48, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.124-0500 c20011| 2016-04-06T02:52:08.856-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 2, 
cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.128-0500 c20011| 2016-04-06T02:52:08.856-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f18c') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.133-0500 c20011| 2016-04-06T02:52:08.856-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.137-0500 c20011| 2016-04-06T02:52:08.856-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.144-0500 c20011| 2016-04-06T02:52:08.856-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.147-0500 c20011| 2016-04-06T02:52:08.856-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.155-0500 c20011| 2016-04-06T02:52:08.857-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.155-0500 c20011| 2016-04-06T02:52:08.857-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.159-0500 c20011| 2016-04-06T02:52:08.857-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|48, t: 1 } and is durable through: { ts: Timestamp 1459929128000|48, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.161-0500 c20011| 2016-04-06T02:52:08.857-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: 
Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.170-0500 c20011| 2016-04-06T02:52:08.857-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.173-0500 c20011| 2016-04-06T02:52:08.857-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|16 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.178-0500 c20011| 2016-04-06T02:52:08.857-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.183-0500 c20011| 2016-04-06T02:52:08.857-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|16 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.185-0500 c20011| 2016-04-06T02:52:08.857-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:46.191-0500 c20011| 2016-04-06T02:52:08.858-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|16 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.194-0500 c20011| 2016-04-06T02:52:08.859-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f18e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128859), why: "splitting chunk [{ _id: -92.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.196-0500 c20011| 2016-04-06T02:52:08.859-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.199-0500 
c20011| 2016-04-06T02:52:08.859-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.202-0500 c20011| 2016-04-06T02:52:08.859-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.206-0500 c20011| 2016-04-06T02:52:08.860-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.209-0500 c20011| 2016-04-06T02:52:08.860-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.213-0500 c20011| 2016-04-06T02:52:08.862-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.218-0500 c20011| 2016-04-06T02:52:08.862-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.221-0500 c20011| 2016-04-06T02:52:08.863-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|49, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|48, t: 1 }, name-id: "128" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.229-0500 c20011| 2016-04-06T02:52:08.863-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.230-0500 c20011| 2016-04-06T02:52:08.863-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.235-0500 c20011| 2016-04-06T02:52:08.863-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|49, t: 1 } and is durable through: { ts: Timestamp 1459929128000|48, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.239-0500 c20011| 2016-04-06T02:52:08.863-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|49, t: 1 
} is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|48, t: 1 }, name-id: "128" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.243-0500 c20011| 2016-04-06T02:52:08.863-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.249-0500 c20011| 2016-04-06T02:52:08.863-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.254-0500 c20011| 2016-04-06T02:52:08.864-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.255-0500 c20011| 2016-04-06T02:52:08.864-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.256-0500 c20011| 2016-04-06T02:52:08.864-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.260-0500 c20011| 2016-04-06T02:52:08.864-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|49, t: 1 } and is durable through: { ts: Timestamp 1459929128000|48, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.263-0500 c20011| 2016-04-06T02:52:08.864-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|49, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|48, t: 1 }, name-id: "128" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.266-0500 c20011| 2016-04-06T02:52:08.864-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.270-0500 c20011| 2016-04-06T02:52:08.866-0500 D 
COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.273-0500 c20011| 2016-04-06T02:52:08.866-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.276-0500 c20011| 2016-04-06T02:52:08.867-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.279-0500 c20011| 2016-04-06T02:52:08.867-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|49, t: 1 } and is durable through: { ts: Timestamp 1459929128000|49, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.281-0500 c20011| 2016-04-06T02:52:08.867-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|49, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.283-0500 c20011| 2016-04-06T02:52:08.866-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.285-0500 c20011| 2016-04-06T02:52:08.867-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.290-0500 c20011| 2016-04-06T02:52:08.867-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.293-0500 c20011| 2016-04-06T02:52:08.867-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.293-0500 c20011| 2016-04-06T02:52:08.867-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|49, t: 1 } and is durable through: { ts: Timestamp 1459929128000|49, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.297-0500 c20011| 
2016-04-06T02:52:08.867-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.299-0500 c20011| 2016-04-06T02:52:08.867-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.302-0500 c20011| 2016-04-06T02:52:08.867-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.306-0500 c20011| 2016-04-06T02:52:08.867-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f18e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128859), why: "splitting chunk [{ _id: -92.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f18e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128859), why: "splitting chunk [{ _id: -92.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.310-0500 c20011| 2016-04-06T02:52:08.867-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|49, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.312-0500 c20011| 2016-04-06T02:52:08.867-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.316-0500 c20011| 2016-04-06T02:52:08.867-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for 
reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.322-0500 c20011| 2016-04-06T02:52:08.867-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|49, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.322-0500 c20011| 2016-04-06T02:52:08.867-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:46.327-0500 c20011| 2016-04-06T02:52:08.867-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.329-0500 c20011| 2016-04-06T02:52:08.868-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|49, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.331-0500 c20011| 2016-04-06T02:52:08.868-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|18 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|49, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.332-0500 c20011| 2016-04-06T02:52:08.868-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.336-0500 c20011| 2016-04-06T02:52:08.868-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|18 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|49, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.339-0500 c20011| 2016-04-06T02:52:08.868-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:46.347-0500 c20011| 2016-04-06T02:52:08.868-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|18 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|49, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.352-0500 c20011| 2016-04-06T02:52:08.868-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: -91.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-92.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-91.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|18 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.354-0500 c20011| 2016-04-06T02:52:08.868-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:46.357-0500 c20011| 2016-04-06T02:52:08.868-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:46.362-0500 c20011| 2016-04-06T02:52:08.868-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.363-0500 c20011| 2016-04-06T02:52:08.868-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-92.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.364-0500 c20011| 2016-04-06T02:52:08.868-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-91.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.368-0500 c20011| 2016-04-06T02:52:08.869-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", 
maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.372-0500 c20011| 2016-04-06T02:52:08.869-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.375-0500 c20011| 2016-04-06T02:52:08.871-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.376-0500 c20011| 2016-04-06T02:52:08.871-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.379-0500 c20011| 2016-04-06T02:52:08.871-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.382-0500 c20011| 2016-04-06T02:52:08.871-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|50, t: 1 } and is durable through: { ts: Timestamp 1459929128000|49, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.387-0500 c20011| 2016-04-06T02:52:08.871-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.391-0500 c20011| 2016-04-06T02:52:08.871-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.392-0500 c20011| 2016-04-06T02:52:08.871-0500 D COMMAND [conn12] command: 
replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.395-0500 c20011| 2016-04-06T02:52:08.871-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.398-0500 c20011| 2016-04-06T02:52:08.871-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|50, t: 1 } and is durable through: { ts: Timestamp 1459929128000|49, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.401-0500 c20011| 2016-04-06T02:52:08.871-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.403-0500 c20011| 2016-04-06T02:52:08.871-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.409-0500 c20011| 2016-04-06T02:52:08.871-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.414-0500 c20011| 2016-04-06T02:52:08.871-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.415-0500 c20011| 2016-04-06T02:52:08.871-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.418-0500 c20011| 2016-04-06T02:52:08.871-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.421-0500 c20011| 2016-04-06T02:52:08.871-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|50, t: 1 } and is durable through: { ts: Timestamp 1459929128000|50, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.426-0500 c20011| 2016-04-06T02:52:08.871-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, 
memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.428-0500 c20011| 2016-04-06T02:52:08.872-0500 D REPL [conn25] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|50, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.434-0500 c20011| 2016-04-06T02:52:08.872-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: -91.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-92.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-91.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|18 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.437-0500 c20011| 2016-04-06T02:52:08.872-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.440-0500 c20011| 2016-04-06T02:52:08.872-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.442-0500 c20011| 2016-04-06T02:52:08.873-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.443-0500 c20011| 2016-04-06T02:52:08.873-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.447-0500 
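
The applyOps completion logged above is the commit point of the split: both halves of chunk [{ _id: -92.0 }, { _id: MaxKey }) are upserted as one atomic unit, and the preCondition makes the commit optimistic, so if any concurrent metadata change had bumped the collection's highest lastmod past 1|18, the whole applyOps would fail instead of writing a stale chunk map. A condensed sketch of the same commit against the config server; all field values are copied from the log entry, where in a live cluster they are generated by the split machinery:

    var configDB = db.getSiblingDB("config");
    var epoch = ObjectId('5704c02806c33406d4d9c0c0');
    configDB.runCommand({
        applyOps: [
            // Upsert (op "u" with b: true) the left half of the split chunk...
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp(1000, 19),
                   lastmodEpoch: epoch, ns: "multidrop.coll",
                   min: { _id: -92.0 }, max: { _id: -91.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-92.0" } },
            // ...and the right half, carrying the next chunk version.
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp(1000, 20),
                   lastmodEpoch: epoch, ns: "multidrop.coll",
                   min: { _id: -91.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-91.0" } }
        ],
        // Optimistic guard: commit only if the newest chunk version is still 1|18.
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1000, 18) } } ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
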
c20011| 2016-04-06T02:52:08.873-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|50, t: 1 } and is durable through: { ts: Timestamp 1459929128000|50, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.451-0500 c20011| 2016-04-06T02:52:08.873-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.454-0500 c20011| 2016-04-06T02:52:08.873-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.458-0500 c20011| 2016-04-06T02:52:08.873-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.459-0500 c20011| 2016-04-06T02:52:08.873-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.465-0500 c20011| 2016-04-06T02:52:08.873-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.872-0500-5704c02865c17830b843f18f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128872), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -92.0 }, max: { _id: MaxKey } }, left: { min: { _id: -92.0 }, max: { _id: -91.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -91.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.470-0500 c20011| 2016-04-06T02:52:08.873-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.471-0500 c20011| 2016-04-06T02:52:08.873-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 
0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.474-0500 c20011| 2016-04-06T02:52:08.874-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|51, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|50, t: 1 }, name-id: "130" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.479-0500 c20011| 2016-04-06T02:52:08.876-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.480-0500 c20011| 2016-04-06T02:52:08.876-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.485-0500 c20011| 2016-04-06T02:52:08.876-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.487-0500 c20011| 2016-04-06T02:52:08.876-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.490-0500 c20011| 2016-04-06T02:52:08.876-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|51, t: 1 } and is durable through: { ts: Timestamp 1459929128000|50, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.492-0500 c20011| 2016-04-06T02:52:08.876-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|51, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|50, t: 1 }, name-id: "130" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.496-0500 c20011| 2016-04-06T02:52:08.876-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.503-0500 c20011| 2016-04-06T02:52:08.876-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
2016-04-06T02:52:46.503-0500 c20011| 2016-04-06T02:52:08.876-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.506-0500 c20011| 2016-04-06T02:52:08.878-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.511-0500 c20011| 2016-04-06T02:52:08.878-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|51, t: 1 } and is durable through: { ts: Timestamp 1459929128000|50, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.513-0500 c20011| 2016-04-06T02:52:08.878-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|51, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|50, t: 1 }, name-id: "130" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.518-0500 c20011| 2016-04-06T02:52:08.878-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.769-0500 c20011| 2016-04-06T02:52:08.878-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.774-0500 c20011| 2016-04-06T02:52:08.878-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.774-0500 c20011| 2016-04-06T02:52:08.878-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.781-0500 c20011| 2016-04-06T02:52:08.878-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.801-0500 c20011| 2016-04-06T02:52:08.878-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 
1459929128000|51, t: 1 } and is durable through: { ts: Timestamp 1459929128000|51, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.810-0500 c20011| 2016-04-06T02:52:08.878-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.814-0500 c20011| 2016-04-06T02:52:08.878-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|51, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.818-0500 c20011| 2016-04-06T02:52:08.878-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.831-0500 c20011| 2016-04-06T02:52:08.878-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.835-0500 c20011| 2016-04-06T02:52:08.878-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.838-0500 c20011| 2016-04-06T02:52:08.878-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|51, t: 1 } and is durable through: { ts: Timestamp 1459929128000|51, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.843-0500 c20011| 2016-04-06T02:52:08.879-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.848-0500 c20011| 2016-04-06T02:52:08.881-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.853-0500 c20011| 2016-04-06T02:52:08.881-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.872-0500-5704c02865c17830b843f18f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128872), what: 
"split", ns: "multidrop.coll", details: { before: { min: { _id: -92.0 }, max: { _id: MaxKey } }, left: { min: { _id: -92.0 }, max: { _id: -91.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -91.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.859-0500 c20011| 2016-04-06T02:52:08.882-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f18e') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.860-0500 c20011| 2016-04-06T02:52:08.882-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.862-0500 c20011| 2016-04-06T02:52:08.882-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.867-0500 c20011| 2016-04-06T02:52:08.882-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.869-0500 c20011| 2016-04-06T02:52:08.882-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f18e') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.874-0500 c20011| 2016-04-06T02:52:08.882-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.879-0500 c20011| 2016-04-06T02:52:08.882-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.885-0500 c20011| 2016-04-06T02:52:08.882-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.892-0500 c20011| 2016-04-06T02:52:08.883-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|52, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|51, t: 1 }, name-id: "131" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.897-0500 c20011| 2016-04-06T02:52:08.884-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.899-0500 c20011| 2016-04-06T02:52:08.884-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.902-0500 c20011| 2016-04-06T02:52:08.884-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.906-0500 c20011| 2016-04-06T02:52:08.884-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|52, t: 1 } and is durable through: { ts: Timestamp 1459929128000|51, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.908-0500 c20011| 2016-04-06T02:52:08.884-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|52, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|51, t: 1 }, name-id: "131" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.914-0500 c20011| 2016-04-06T02:52:08.884-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.916-0500 c20011| 2016-04-06T02:52:08.885-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.921-0500 c20011| 2016-04-06T02:52:08.885-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:46.927-0500 c20011| 2016-04-06T02:52:08.885-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.927-0500 c20011| 2016-04-06T02:52:08.885-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.940-0500 c20011| 2016-04-06T02:52:08.885-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|52, t: 1 } and is durable through: { ts: Timestamp 1459929128000|51, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.943-0500 c20011| 2016-04-06T02:52:08.885-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|52, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|51, t: 1 }, name-id: "131" } [js_test:multi_coll_drop] 2016-04-06T02:52:46.956-0500 c20011| 2016-04-06T02:52:08.885-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.959-0500 c20011| 2016-04-06T02:52:08.885-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:46.965-0500 c20011| 2016-04-06T02:52:08.885-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:46.966-0500 c20011| 2016-04-06T02:52:08.885-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:46.980-0500 c20011| 2016-04-06T02:52:08.885-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.986-0500 c20011| 2016-04-06T02:52:08.885-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|52, t: 1 } and is durable through: { ts: Timestamp 1459929128000|52, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:46.991-0500 c20011| 2016-04-06T02:52:08.885-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|52, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.015-0500 c20011| 2016-04-06T02:52:08.885-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.023-0500 c20011| 2016-04-06T02:52:08.886-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.025-0500 c20011| 2016-04-06T02:52:08.886-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.026-0500 c20011| 2016-04-06T02:52:08.886-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f18e') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, 
Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.027-0500 c20011| 2016-04-06T02:52:08.886-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.028-0500 c20011| 2016-04-06T02:52:08.886-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.031-0500 c20011| 2016-04-06T02:52:08.886-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.036-0500 c20011| 2016-04-06T02:52:08.886-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.036-0500 c20011| 2016-04-06T02:52:08.886-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|52, t: 1 } and is durable through: { ts: Timestamp 1459929128000|52, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.038-0500 c20011| 2016-04-06T02:52:08.886-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.042-0500 c20011| 2016-04-06T02:52:08.886-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.048-0500 c20011| 2016-04-06T02:52:08.888-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f190'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128888), why: "splitting chunk [{ _id: -91.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.054-0500 c20011| 2016-04-06T02:52:08.888-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } 
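The conn25 entries above are the config server's side of the sharding distributed lock: the shard takes "multidrop.coll" in config.locks with a findAndModify that matches the lock document only while state is 0, flips it to state 2 (held), and requires { w: "majority" } so the lock survives a config-primary step-down. A minimal shell sketch of the same acquisition against a test config server, assuming the fields seen in the log; the who/process/why strings are illustrative placeholders, not values the protocol mandates:

    // Sketch: acquire the "multidrop.coll" distributed lock the way the
    // logged findAndModify does. Owner strings below are placeholders.
    var lockId = ObjectId();                          // fresh ts identifies this ownership attempt
    var res = db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },   // match only an unlocked document
        update: { $set: {
            ts: lockId,
            state: 2,                                 // 2 == exclusively held
            who: "host:port:epoch:rand:connN",        // placeholder owner id
            process: "host:port:epoch",               // placeholder process id
            when: new Date(),
            why: "splitting chunk [{ _id: -90.0 }, { _id: MaxKey }) in multidrop.coll"
        } },
        upsert: true,                                 // create the lock doc on first use
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    printjson(res);  // a losing racer typically surfaces as a duplicate-key error on _id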
[js_test:multi_coll_drop] 2016-04-06T02:52:47.059-0500 c20011| 2016-04-06T02:52:08.888-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.062-0500 c20011| 2016-04-06T02:52:08.888-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.067-0500 c20011| 2016-04-06T02:52:08.889-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.073-0500 c20011| 2016-04-06T02:52:08.889-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.077-0500 c20011| 2016-04-06T02:52:08.890-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|53, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|52, t: 1 }, name-id: "132" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.082-0500 c20011| 2016-04-06T02:52:08.890-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.082-0500 c20011| 2016-04-06T02:52:08.890-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.085-0500 c20011| 2016-04-06T02:52:08.890-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|53, t: 1 } and is durable through: { ts: Timestamp 1459929128000|52, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.088-0500 c20011| 2016-04-06T02:52:08.890-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|53, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|52, t: 1 }, name-id: "132" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.095-0500 c20011| 2016-04-06T02:52:08.890-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.099-0500 c20011| 
2016-04-06T02:52:08.890-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.101-0500 c20011| 2016-04-06T02:52:08.891-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.103-0500 c20011| 2016-04-06T02:52:08.891-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.107-0500 c20011| 2016-04-06T02:52:08.891-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.111-0500 c20011| 2016-04-06T02:52:08.891-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|53, t: 1 } and is durable through: { ts: Timestamp 1459929128000|52, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.118-0500 c20011| 2016-04-06T02:52:08.891-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|53, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|52, t: 1 }, name-id: "132" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.131-0500 c20011| 2016-04-06T02:52:08.891-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.141-0500 c20011| 2016-04-06T02:52:08.891-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.145-0500 c20011| 2016-04-06T02:52:08.891-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } [js_test:multi_coll_drop] 
2016-04-06T02:52:47.153-0500 c20011| 2016-04-06T02:52:08.892-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.156-0500 c20011| 2016-04-06T02:52:08.892-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.158-0500 c20011| 2016-04-06T02:52:08.892-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.169-0500 c20011| 2016-04-06T02:52:08.892-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|53, t: 1 } and is durable through: { ts: Timestamp 1459929128000|53, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.170-0500 c20011| 2016-04-06T02:52:08.892-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|53, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.180-0500 c20011| 2016-04-06T02:52:08.892-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.191-0500 c20011| 2016-04-06T02:52:08.892-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.201-0500 c20011| 2016-04-06T02:52:08.892-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f190'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128888), why: "splitting chunk [{ _id: -91.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f190'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128888), why: "splitting chunk [{ _id: -91.0 }, { _id: MaxKey }) in 
multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.205-0500 c20011| 2016-04-06T02:52:08.892-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.208-0500 c20011| 2016-04-06T02:52:08.892-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.211-0500 c20011| 2016-04-06T02:52:08.893-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.223-0500 c20011| 2016-04-06T02:52:08.893-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.224-0500 c20011| 2016-04-06T02:52:08.893-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.225-0500 c20011| 2016-04-06T02:52:08.893-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|53, t: 1 } and is durable through: { ts: Timestamp 1459929128000|53, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.231-0500 c20011| 2016-04-06T02:52:08.893-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.237-0500 c20011| 2016-04-06T02:52:08.893-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.244-0500 c20011| 2016-04-06T02:52:08.894-0500 D COMMAND [conn25] run command 
config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: -90.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-91.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-90.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|20 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.246-0500 c20011| 2016-04-06T02:52:08.894-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:47.249-0500 c20011| 2016-04-06T02:52:08.894-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:47.253-0500 c20011| 2016-04-06T02:52:08.894-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.254-0500 c20011| 2016-04-06T02:52:08.894-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-91.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.256-0500 c20011| 2016-04-06T02:52:08.894-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-90.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.260-0500 c20011| 2016-04-06T02:52:08.894-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 28 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.267-0500 c20011| 2016-04-06T02:52:08.894-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.271-0500 c20011| 2016-04-06T02:52:08.896-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|54, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.273-0500 c20011| 2016-04-06T02:52:08.896-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.276-0500 c20011| 2016-04-06T02:52:08.896-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|54, t: 1 } and is durable through: { ts: Timestamp 1459929128000|53, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.279-0500 c20011| 2016-04-06T02:52:08.896-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.293-0500 c20011| 2016-04-06T02:52:08.896-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.295-0500 c20011| 2016-04-06T02:52:08.897-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.295-0500 c20011| 2016-04-06T02:52:08.897-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.298-0500 c20011| 2016-04-06T02:52:08.897-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.301-0500 c20011| 2016-04-06T02:52:08.897-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|54, t: 1 } and is durable through: { ts: Timestamp 1459929128000|53, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.306-0500 c20011| 2016-04-06T02:52:08.897-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.312-0500 c20011| 2016-04-06T02:52:08.897-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: 
Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.340-0500 c20011| 2016-04-06T02:52:08.897-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|54, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|53, t: 1 }, name-id: "133" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.342-0500 c20011| 2016-04-06T02:52:08.898-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.353-0500 c20011| 2016-04-06T02:52:08.899-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.354-0500 c20011| 2016-04-06T02:52:08.899-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.357-0500 c20011| 2016-04-06T02:52:08.899-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|54, t: 1 } and is durable through: { ts: Timestamp 1459929128000|54, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.358-0500 c20011| 2016-04-06T02:52:08.899-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|54, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.359-0500 c20011| 2016-04-06T02:52:08.899-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.363-0500 c20011| 2016-04-06T02:52:08.899-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.368-0500 c20011| 2016-04-06T02:52:08.900-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 
locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.370-0500 c20011| 2016-04-06T02:52:08.900-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.377-0500 c20011| 2016-04-06T02:52:08.900-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: -90.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-91.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-90.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|20 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.380-0500 c20011| 2016-04-06T02:52:08.900-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.900-0500-5704c02865c17830b843f191", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128900), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -91.0 }, max: { _id: MaxKey } }, left: { min: { _id: -91.0 }, max: { _id: -90.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -90.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.382-0500 c20011| 2016-04-06T02:52:08.900-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.385-0500 c20011| 2016-04-06T02:52:08.900-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
2016-04-06T02:52:47.385-0500 c20011| 2016-04-06T02:52:08.900-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.389-0500 c20011| 2016-04-06T02:52:08.900-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.395-0500 c20011| 2016-04-06T02:52:08.900-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|54, t: 1 } and is durable through: { ts: Timestamp 1459929128000|54, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.400-0500 c20011| 2016-04-06T02:52:08.900-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.403-0500 c20011| 2016-04-06T02:52:08.900-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.404-0500 c20011| 2016-04-06T02:52:08.901-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.406-0500 c20011| 2016-04-06T02:52:08.901-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.408-0500 c20011| 2016-04-06T02:52:08.902-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|55, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|54, t: 1 }, name-id: "134" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.409-0500 c20011| 2016-04-06T02:52:08.902-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 
2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.411-0500 c20011| 2016-04-06T02:52:08.902-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.412-0500 c20011| 2016-04-06T02:52:08.902-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|55, t: 1 } and is durable through: { ts: Timestamp 1459929128000|54, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.415-0500 c20011| 2016-04-06T02:52:08.902-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|55, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|54, t: 1 }, name-id: "134" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.418-0500 c20011| 2016-04-06T02:52:08.902-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.419-0500 c20011| 2016-04-06T02:52:08.902-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.421-0500 c20011| 2016-04-06T02:52:08.903-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.422-0500 c20011| 2016-04-06T02:52:08.903-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.422-0500 c20011| 2016-04-06T02:52:08.903-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.426-0500 c20011| 2016-04-06T02:52:08.903-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.429-0500 c20011| 2016-04-06T02:52:08.903-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|55, t: 1 } and is durable through: { ts: Timestamp 1459929128000|54, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.432-0500 c20011| 2016-04-06T02:52:08.903-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 
1459929128000|55, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|54, t: 1 }, name-id: "134" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.434-0500 c20011| 2016-04-06T02:52:08.903-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.436-0500 c20011| 2016-04-06T02:52:08.904-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.442-0500 c20011| 2016-04-06T02:52:08.905-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.443-0500 c20011| 2016-04-06T02:52:08.905-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.446-0500 c20011| 2016-04-06T02:52:08.905-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|55, t: 1 } and is durable through: { ts: Timestamp 1459929128000|55, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.447-0500 c20011| 2016-04-06T02:52:08.905-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|55, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.449-0500 c20011| 2016-04-06T02:52:08.905-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.454-0500 c20011| 2016-04-06T02:52:08.905-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.459-0500 c20011| 2016-04-06T02:52:08.905-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: 
"mongovm16-2016-04-06T02:52:08.900-0500-5704c02865c17830b843f191", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128900), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -91.0 }, max: { _id: MaxKey } }, left: { min: { _id: -91.0 }, max: { _id: -90.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -90.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.463-0500 c20011| 2016-04-06T02:52:08.905-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.467-0500 c20011| 2016-04-06T02:52:08.906-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.469-0500 c20011| 2016-04-06T02:52:08.906-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f190') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.475-0500 c20011| 2016-04-06T02:52:08.906-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.477-0500 c20011| 2016-04-06T02:52:08.906-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.479-0500 c20011| 2016-04-06T02:52:08.906-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f190') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.482-0500 c20011| 2016-04-06T02:52:08.906-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.487-0500 c20011| 2016-04-06T02:52:08.906-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.493-0500 c20011| 2016-04-06T02:52:08.906-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.497-0500 c20011| 2016-04-06T02:52:08.907-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.498-0500 c20011| 2016-04-06T02:52:08.907-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.499-0500 c20011| 2016-04-06T02:52:08.907-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.502-0500 c20011| 2016-04-06T02:52:08.907-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|55, t: 1 } and is durable through: { ts: Timestamp 1459929128000|55, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.509-0500 c20011| 2016-04-06T02:52:08.907-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.517-0500 c20011| 2016-04-06T02:52:08.908-0500 D COMMAND [conn12] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.518-0500 c20011| 2016-04-06T02:52:08.908-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.521-0500 c20011| 2016-04-06T02:52:08.908-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|56, t: 1 } and is durable through: { ts: Timestamp 1459929128000|55, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.525-0500 c20011| 2016-04-06T02:52:08.908-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.529-0500 c20011| 2016-04-06T02:52:08.908-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.534-0500 c20011| 2016-04-06T02:52:08.909-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.537-0500 c20011| 2016-04-06T02:52:08.909-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|56, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|55, t: 1 }, name-id: "135" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.542-0500 c20011| 2016-04-06T02:52:08.909-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.545-0500 c20011| 2016-04-06T02:52:08.909-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.549-0500 c20011| 2016-04-06T02:52:08.909-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } 
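The repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines are the { w: "majority" } wait in action: each findAndModify and applyOps above blocks until the secondaries' replSetUpdatePosition reports push _lastCommittedOpTime past the write's optime, at which point the command acknowledges. A shell-level probe of the same behavior against a test config replica set; the probe document is hypothetical, not part of the test:

    // Sketch: a { w: "majority" } write only acknowledges once its optime is
    // majority-committed, i.e. once the wait logged above resolves.
    var res = db.getSiblingDB("config").runCommand({
        insert: "changelog",
        documents: [ { _id: "probe-" + ObjectId().str, what: "majority-commit probe" } ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // If a majority is not durable within 15s, res.writeConcernError reports
    // the timeout rather than the insert failing outright.
    printjson(res);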
[js_test:multi_coll_drop] 2016-04-06T02:52:47.551-0500 c20011| 2016-04-06T02:52:08.909-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.560-0500 c20011| 2016-04-06T02:52:08.909-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|56, t: 1 } and is durable through: { ts: Timestamp 1459929128000|55, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.561-0500 c20011| 2016-04-06T02:52:08.909-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|56, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|55, t: 1 }, name-id: "135" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.571-0500 c20011| 2016-04-06T02:52:08.909-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.575-0500 c20011| 2016-04-06T02:52:08.911-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.576-0500 c20011| 2016-04-06T02:52:08.911-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.580-0500 c20011| 2016-04-06T02:52:08.911-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.580-0500 c20011| 2016-04-06T02:52:08.911-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:47.583-0500 c20011| 2016-04-06T02:52:08.911-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.587-0500 c20011| 2016-04-06T02:52:08.911-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached 
optime: { ts: Timestamp 1459929128000|56, t: 1 } and is durable through: { ts: Timestamp 1459929128000|56, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.590-0500 c20011| 2016-04-06T02:52:08.911-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|56, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.594-0500 c20011| 2016-04-06T02:52:08.911-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.597-0500 c20011| 2016-04-06T02:52:08.911-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|56, t: 1 } and is durable through: { ts: Timestamp 1459929128000|56, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.599-0500 c20011| 2016-04-06T02:52:08.911-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.607-0500 c20011| 2016-04-06T02:52:08.911-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.609-0500 c20011| 2016-04-06T02:52:08.911-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f190') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.613-0500 c20011| 2016-04-06T02:52:08.911-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.616-0500 c20011| 2016-04-06T02:52:08.911-0500 I COMMAND 
[conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.618-0500 c20011| 2016-04-06T02:52:08.912-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.622-0500 c20011| 2016-04-06T02:52:08.912-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.625-0500 c20011| 2016-04-06T02:52:08.914-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f192'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128914), why: "splitting chunk [{ _id: -90.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.629-0500 c20011| 2016-04-06T02:52:08.914-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.634-0500 c20012| 2016-04-06T02:52:08.530-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 404 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.637-0500 c20012| 2016-04-06T02:52:08.530-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 404 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.639-0500 c20012| 2016-04-06T02:52:08.531-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 406 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.531-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|14, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.640-0500 c20012| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 406 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.646-0500 c20012| 2016-04-06T02:52:08.531-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.650-0500 c20012| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: 
RemoteCommand 407 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.652-0500 c20012| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 407 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.653-0500 c20012| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 407 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.657-0500 c20012| 2016-04-06T02:52:08.531-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 406 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.663-0500 c20012| 2016-04-06T02:52:08.532-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|15, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.665-0500 c20012| 2016-04-06T02:52:08.532-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:47.669-0500 c20012| 2016-04-06T02:52:08.532-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 410 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.532-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.677-0500 c20012| 2016-04-06T02:52:08.532-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 410 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.685-0500 c20012| 2016-04-06T02:52:08.532-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 410 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|16, t: 1, h: 1691968072355252476, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.689-0500 c20012| 2016-04-06T02:52:08.533-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|16 and ending at ts: Timestamp 1459929128000|16 [js_test:multi_coll_drop] 2016-04-06T02:52:47.689-0500 c20012| 2016-04-06T02:52:08.533-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:47.691-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.692-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.694-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.696-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.697-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.697-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.701-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.704-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.704-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.706-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.721-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.723-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.726-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.727-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.729-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.730-0500 c20012| 2016-04-06T02:52:08.533-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:47.730-0500 c20012| 2016-04-06T02:52:08.533-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.732-0500 c20012| 2016-04-06T02:52:08.533-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.733-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.735-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:47.735-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.736-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.738-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.740-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.741-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.743-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.745-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.746-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.747-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.748-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.749-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.750-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.751-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.752-0500 c20012| 2016-04-06T02:52:08.534-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.756-0500 c20012| 2016-04-06T02:52:08.534-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:47.765-0500 c20012| 2016-04-06T02:52:08.534-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.775-0500 c20012| 2016-04-06T02:52:08.534-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 412 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|15, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.781-0500 c20012| 2016-04-06T02:52:08.534-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 412 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.787-0500 c20012| 2016-04-06T02:52:08.534-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 412 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.791-0500 c20012| 2016-04-06T02:52:08.535-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 414 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.535-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|15, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.791-0500 c20012| 2016-04-06T02:52:08.535-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 414 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.798-0500 c20012| 2016-04-06T02:52:08.539-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.804-0500 c20012| 2016-04-06T02:52:08.539-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 415 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.805-0500 c20012| 2016-04-06T02:52:08.539-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 415 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.807-0500 c20012| 2016-04-06T02:52:08.539-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 415 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.809-0500 c20012| 2016-04-06T02:52:08.539-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 414 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.812-0500 c20012| 2016-04-06T02:52:08.539-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.817-0500 c20012| 2016-04-06T02:52:08.539-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:47.823-0500 c20012| 2016-04-06T02:52:08.539-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 418 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.539-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.825-0500 c20012| 2016-04-06T02:52:08.539-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 418 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.831-0500 c20012| 2016-04-06T02:52:08.540-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.838-0500 c20012| 2016-04-06T02:52:08.540-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.843-0500 c20012| 2016-04-06T02:52:08.540-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.846-0500 c20012| 2016-04-06T02:52:08.540-0500 D QUERY [conn7] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:47.852-0500 c20012| 2016-04-06T02:52:08.540-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|16, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:706 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:47.857-0500 c20012| 2016-04-06T02:52:08.542-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 418 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|17, t: 1, h: -503423693469934212, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f17e'), state: 2, when: new Date(1459929128542), why: "splitting chunk [{ _id: -100.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.858-0500 c20012| 2016-04-06T02:52:08.543-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|17 and ending at ts: Timestamp 1459929128000|17 [js_test:multi_coll_drop] 2016-04-06T02:52:47.859-0500 c20012| 2016-04-06T02:52:08.543-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:47.861-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.862-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.863-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.869-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.870-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.871-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.872-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.872-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.873-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.873-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.874-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.876-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.877-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.880-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.881-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.882-0500 c20012| 2016-04-06T02:52:08.543-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.882-0500 c20012| 2016-04-06T02:52:08.543-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:47.886-0500 c20012| 2016-04-06T02:52:08.543-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:47.887-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.891-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:47.892-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.896-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.897-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.898-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.899-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.904-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.905-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.906-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.908-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.908-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.911-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.913-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.914-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.915-0500 c20012| 2016-04-06T02:52:08.544-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:47.924-0500 c20012| 2016-04-06T02:52:08.545-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:47.939-0500 c20012| 2016-04-06T02:52:08.545-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.950-0500 c20012| 2016-04-06T02:52:08.545-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 420 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:47.953-0500 c20012| 2016-04-06T02:52:08.545-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 420 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.958-0500 c20012| 2016-04-06T02:52:08.545-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 420 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.964-0500 c20012| 2016-04-06T02:52:08.545-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 422 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.545-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|16, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.964-0500 c20012| 2016-04-06T02:52:08.545-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 422 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:47.967-0500 c20012| 2016-04-06T02:52:08.547-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 422 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.969-0500 c20012| 2016-04-06T02:52:08.547-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|17, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:47.969-0500 c20012| 2016-04-06T02:52:08.547-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:47.973-0500 c20012| 2016-04-06T02:52:08.547-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 424 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.547-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:47.981-0500 c20012| 2016-04-06T02:52:08.547-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 424 on host mongovm16:20011 
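The stretch of log above keeps repeating two commands that are worth isolating. The secondary c20012 tails the primary's oplog with short awaitData getMores that carry its lastKnownCommittedOpTime, and conn7's config.chunks find is pinned with readConcern { level: "majority", afterOpTime: ... }, which is why the server first logs "Waiting for 'committed' snapshot to be available for reading" and only answers once _lastCommittedOpTime has caught up. The shell sketch below reconstructs both calls; it is illustrative only, since the host names, the cursor id, and the opTime literal are specific to this test run and will not exist elsewhere.

// Minimal sketch of the two commands this section of the log repeats.
// All literals (hosts, cursor id 20785203637, Timestamp values) are copied
// from the log above and are placeholders outside this run.

// (1) The oplog tail c20012 issues against the primary (RemoteCommands
//     406/410/414/... above). getMore needs a live cursor, so this exact
//     id only works inside the run that created it.
var primary = new Mongo("mongovm16:20011");
var tail = primary.getDB("local").runCommand({
    getMore: NumberLong("20785203637"),
    collection: "oplog.rs",
    maxTimeMS: 2500,
    term: NumberLong(1),
    lastKnownCommittedOpTime: { ts: Timestamp(1459929128, 16), t: NumberLong(1) }
});

// (2) The majority read conn7 runs: the server parks the command until the
//     'committed' snapshot reaches afterOpTime, then serves it from that
//     snapshot (IXSCAN { ns: 1, lastmod: 1 } in the log above).
var configSvr = new Mongo("mongovm16:20012");
var chunks = configSvr.getDB("config").runCommand({
    find: "chunks",
    filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 0) } },
    sort: { lastmod: 1 },
    readConcern: { level: "majority",
                   afterOpTime: { ts: Timestamp(1459929128, 16), t: NumberLong(1) } },
    maxTimeMS: 30000
});
if (chunks.ok) {
    printjson(chunks.cursor.firstBatch);
}

Passing lastKnownCommittedOpTime on each getMore is what lets the sync source return an otherwise-empty batch as soon as its commit point moves past what the tailing node already knows; that is why so many of the "fetcher read 0 operations from remote oplog" fetches above are still immediately followed by "Updating _lastCommittedOpTime". Progress flows the other way through the replSetUpdatePosition commands (RemoteCommands 407 through 437 above), which report each member's durable and applied optimes upstream.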
[js_test:multi_coll_drop] 2016-04-06T02:52:48.004-0500 c20012| 2016-04-06T02:52:08.548-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.011-0500 c20012| 2016-04-06T02:52:08.548-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 425 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.015-0500 c20012| 2016-04-06T02:52:08.548-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 425 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.018-0500 c20012| 2016-04-06T02:52:08.548-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 425 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.025-0500 c20012| 2016-04-06T02:52:08.550-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 424 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|18, t: 1, h: -6620679516550812391, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: -99.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-100.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-99.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.027-0500 c20012| 2016-04-06T02:52:08.550-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|18 and ending at ts: Timestamp 1459929128000|18 [js_test:multi_coll_drop] 2016-04-06T02:52:48.031-0500 c20012| 2016-04-06T02:52:08.551-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:48.036-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.038-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.042-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.044-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.045-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.049-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.049-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.051-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.052-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.053-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.055-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.056-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.057-0500 c20012| 2016-04-06T02:52:08.551-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.058-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.059-0500 c20012| 2016-04-06T02:52:08.552-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:48.060-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.062-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.064-0500 c20012| 2016-04-06T02:52:08.552-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-100.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.066-0500 c20012| 2016-04-06T02:52:08.552-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-99.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.067-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:48.070-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.070-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.072-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.074-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.075-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.088-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.088-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.091-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.092-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.099-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.101-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.101-0500 c20012| 2016-04-06T02:52:08.552-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.109-0500 c20012| 2016-04-06T02:52:08.554-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 428 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.554-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|17, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.110-0500 c20012| 2016-04-06T02:52:08.554-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.114-0500 c20012| 2016-04-06T02:52:08.554-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.119-0500 c20012| 2016-04-06T02:52:08.554-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.123-0500 c20012| 2016-04-06T02:52:08.555-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:48.135-0500 c20012| 2016-04-06T02:52:08.556-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.143-0500 c20012| 2016-04-06T02:52:08.556-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 429 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.146-0500 c20012| 2016-04-06T02:52:08.556-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 429 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.148-0500 c20012| 2016-04-06T02:52:08.556-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 429 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.151-0500 c20012| 2016-04-06T02:52:08.556-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 428 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.158-0500 c20012| 2016-04-06T02:52:08.559-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 428 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|19, t: 1, h: 6809334556305798525, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.554-0500-5704c02865c17830b843f17f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128554), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -100.0 }, max: { _id: MaxKey } }, left: { min: { _id: -100.0 }, max: { _id: -99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -99.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.162-0500 c20012| 2016-04-06T02:52:08.560-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|18, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.164-0500 c20012| 2016-04-06T02:52:08.560-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|19 and ending at ts: Timestamp 1459929128000|19 [js_test:multi_coll_drop] 2016-04-06T02:52:48.166-0500 c20012| 2016-04-06T02:52:08.560-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:48.166-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.170-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.172-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.174-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.177-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.177-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.179-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.180-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.182-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.183-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.185-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.185-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.187-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.187-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.187-0500 c20012| 2016-04-06T02:52:08.560-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:48.188-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.189-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.190-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.190-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.191-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:48.192-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.194-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.194-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.195-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.196-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.198-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.198-0500 c20012| 2016-04-06T02:52:08.560-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.201-0500 c20012| 2016-04-06T02:52:08.561-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.202-0500 c20012| 2016-04-06T02:52:08.561-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.202-0500 c20012| 2016-04-06T02:52:08.561-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.203-0500 c20012| 2016-04-06T02:52:08.561-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.203-0500 c20012| 2016-04-06T02:52:08.561-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.205-0500 c20012| 2016-04-06T02:52:08.561-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.206-0500 c20012| 2016-04-06T02:52:08.561-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:48.211-0500 c20012| 2016-04-06T02:52:08.561-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.215-0500 c20012| 2016-04-06T02:52:08.561-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 432 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|17, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.216-0500 c20012| 2016-04-06T02:52:08.561-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 432 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.219-0500 c20012| 2016-04-06T02:52:08.561-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 432 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.228-0500 c20012| 2016-04-06T02:52:08.561-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.238-0500 c20012| 2016-04-06T02:52:08.561-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 433 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|18, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.238-0500 c20012| 2016-04-06T02:52:08.561-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 433 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.241-0500 c20012| 2016-04-06T02:52:08.562-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 433 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.244-0500 c20012| 2016-04-06T02:52:08.562-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 436 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.562-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|18, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.245-0500 c20012| 2016-04-06T02:52:08.562-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 436 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.258-0500 c20012| 2016-04-06T02:52:08.563-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.264-0500 c20012| 2016-04-06T02:52:08.563-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 437 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.268-0500 s20014| 2016-04-06T02:52:31.652-0500 D ASIO [Balancer] startCommand: RemoteCommand 295 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:01.652-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.269-0500 c20013| 2016-04-06T02:52:08.870-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-91.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.273-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.276-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.277-0500 c20013| 2016-04-06T02:52:08.869-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.279-0500 c20011| 2016-04-06T02:52:08.914-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.281-0500 c20011| 2016-04-06T02:52:08.914-0500 D QUERY 
[conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.283-0500 c20011| 2016-04-06T02:52:08.914-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.284-0500 s20014| 2016-04-06T02:52:31.653-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 295 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:48.287-0500 c20011| 2016-04-06T02:52:08.915-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.290-0500 c20011| 2016-04-06T02:52:08.915-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|57, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|56, t: 1 }, name-id: "136" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.294-0500 c20011| 2016-04-06T02:52:08.916-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.294-0500 c20011| 2016-04-06T02:52:08.916-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.301-0500 c20011| 2016-04-06T02:52:08.916-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|57, t: 1 } and is durable through: { ts: Timestamp 1459929128000|56, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.303-0500 c20011| 2016-04-06T02:52:08.916-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|57, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|56, t: 1 }, name-id: "136" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.306-0500 c20011| 2016-04-06T02:52:08.916-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.312-0500 c20011| 2016-04-06T02:52:08.916-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.319-0500 c20011| 2016-04-06T02:52:08.916-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.320-0500 c20011| 2016-04-06T02:52:08.916-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.325-0500 c20011| 2016-04-06T02:52:08.916-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.327-0500 c20012| 2016-04-06T02:52:08.563-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 437 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.329-0500 c20012| 2016-04-06T02:52:08.563-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 436 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.330-0500 c20012| 2016-04-06T02:52:08.563-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 437 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.333-0500 c20012| 2016-04-06T02:52:08.563-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|19, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.336-0500 c20012| 2016-04-06T02:52:08.564-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:48.340-0500 c20012| 2016-04-06T02:52:08.564-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 440 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.564-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.342-0500 c20012| 2016-04-06T02:52:08.566-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 440 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.346-0500 c20012| 2016-04-06T02:52:08.566-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 440 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|20, t: 1, h: -3904568443163544586, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.350-0500 c20012| 2016-04-06T02:52:08.566-0500 
D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|20 and ending at ts: Timestamp 1459929128000|20 [js_test:multi_coll_drop] 2016-04-06T02:52:48.352-0500 c20012| 2016-04-06T02:52:08.566-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:48.354-0500 c20012| 2016-04-06T02:52:08.566-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.354-0500 c20012| 2016-04-06T02:52:08.566-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.355-0500 c20012| 2016-04-06T02:52:08.566-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.357-0500 c20012| 2016-04-06T02:52:08.566-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.360-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.361-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.361-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.364-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.366-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.367-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.368-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.371-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.373-0500 c20012| 2016-04-06T02:52:08.567-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:48.374-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.376-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.378-0500 c20012| 2016-04-06T02:52:08.567-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.384-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.387-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.389-0500 c20012| 
2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.391-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.392-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.394-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.396-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.396-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.398-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.399-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.400-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.402-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.402-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.406-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.407-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.408-0500 c20012| 2016-04-06T02:52:08.567-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.412-0500 c20012| 2016-04-06T02:52:08.568-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.414-0500 c20012| 2016-04-06T02:52:08.568-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.419-0500 c20012| 2016-04-06T02:52:08.568-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 442 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.568-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|19, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.421-0500 c20012| 2016-04-06T02:52:08.568-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:48.422-0500 c20012| 2016-04-06T02:52:08.569-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 442 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.430-0500 c20012| 2016-04-06T02:52:08.569-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.442-0500 c20012| 2016-04-06T02:52:08.569-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 443 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|19, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.444-0500 c20012| 2016-04-06T02:52:08.569-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 443 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.446-0500 c20012| 2016-04-06T02:52:08.569-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 443 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.450-0500 c20012| 2016-04-06T02:52:08.572-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 442 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.451-0500 c20012| 2016-04-06T02:52:08.572-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|20, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.453-0500 c20012| 2016-04-06T02:52:08.572-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:48.461-0500 c20012| 2016-04-06T02:52:08.573-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 446 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.573-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.463-0500 c20012| 2016-04-06T02:52:08.573-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 446 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.472-0500 c20012| 2016-04-06T02:52:08.575-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 
0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.477-0500 c20012| 2016-04-06T02:52:08.575-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 447 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.479-0500 c20012| 2016-04-06T02:52:08.575-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 447 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:48.480-0500 c20012| 2016-04-06T02:52:08.575-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 447 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.483-0500 c20012| 2016-04-06T02:52:08.578-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.486-0500 c20012| 2016-04-06T02:52:08.578-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.490-0500 c20012| 2016-04-06T02:52:08.578-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.491-0500 c20012| 2016-04-06T02:52:08.578-0500 D QUERY [conn7] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: 'ns_1_min_1' io: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.496-0500 c20012| 2016-04-06T02:52:08.578-0500 D QUERY [conn7] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: 'ns_1_shard_1_min_1' io: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.498-0500 c20012| 2016-04-06T02:52:08.578-0500 D QUERY [conn7] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: 'ns_1_lastmod_1' io: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.500-0500 c20012| 2016-04-06T02:52:08.579-0500 D QUERY [conn7] Scoring query plan: IXSCAN { ns: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:48.504-0500 c20012| 2016-04-06T02:52:08.579-0500 D QUERY [conn7] score(1.0002) = baseScore(1) + productivity((0 advanced)/(1 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:48.506-0500 c20012| 2016-04-06T02:52:08.579-0500 D QUERY [conn7] Scoring query plan: IXSCAN { ns: 1, shard: 1, min: 1 } planHitEOF=0 [js_test:multi_coll_drop] 2016-04-06T02:52:48.510-0500 c20012| 2016-04-06T02:52:08.579-0500 D QUERY [conn7] score(1.0002) = baseScore(1) + productivity((0 advanced)/(1 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) [js_test:multi_coll_drop] 2016-04-06T02:52:48.512-0500 c20012| 2016-04-06T02:52:08.579-0500 D QUERY [conn7] Scoring query plan: IXSCAN { ns: 1, lastmod: 1 } planHitEOF=1 [js_test:multi_coll_drop] 2016-04-06T02:52:48.513-0500 c20012| 2016-04-06T02:52:08.579-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:48.514-0500 c20012| 2016-04-06T02:52:08.579-0500 D QUERY [conn7] Winning plan: IXSCAN { ns: 1, lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.518-0500 c20012| 2016-04-06T02:52:08.579-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|20, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 fromMultiPlanner:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.523-0500 c20012| 2016-04-06T02:52:08.580-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 446 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|21, t: 1, h: -7910042500719648602, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f180'), state: 2, when: new 
Date(1459929128579), why: "splitting chunk [{ _id: -99.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.528-0500 c20012| 2016-04-06T02:52:08.580-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|21 and ending at ts: Timestamp 1459929128000|21 [js_test:multi_coll_drop] 2016-04-06T02:52:48.529-0500 c20012| 2016-04-06T02:52:08.580-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:48.530-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.532-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.534-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.537-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:48.549-0500 c20011| 2016-04-06T02:52:08.916-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|57, t: 1 } and is durable through: { ts: Timestamp 1459929128000|56, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.552-0500 c20011| 2016-04-06T02:52:08.916-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|57, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|56, t: 1 }, name-id: "136" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.557-0500 c20011| 2016-04-06T02:52:08.916-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.558-0500 c20011| 2016-04-06T02:52:08.917-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.563-0500 c20011| 2016-04-06T02:52:08.917-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.567-0500 c20011| 2016-04-06T02:52:08.918-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.569-0500 c20011| 2016-04-06T02:52:08.918-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.573-0500 c20011| 2016-04-06T02:52:08.918-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.576-0500 c20011| 2016-04-06T02:52:08.918-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.577-0500 c20011| 2016-04-06T02:52:08.918-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.579-0500 c20011| 2016-04-06T02:52:08.918-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|57, t: 1 } and is durable through: { ts: Timestamp 1459929128000|57, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.585-0500 c20011| 2016-04-06T02:52:08.918-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|57, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.588-0500 c20011| 2016-04-06T02:52:08.918-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.591-0500 c20011| 2016-04-06T02:52:08.918-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|57, t: 1 } and is durable through: { ts: Timestamp 1459929128000|57, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.594-0500 c20011| 2016-04-06T02:52:08.918-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.599-0500 c20011| 2016-04-06T02:52:08.918-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.616-0500 c20011| 2016-04-06T02:52:08.918-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f192'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128914), why: "splitting chunk [{ _id: -90.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f192'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128914), why: "splitting chunk [{ _id: -90.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.621-0500 c20011| 2016-04-06T02:52:08.918-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.627-0500 c20011| 2016-04-06T02:52:08.918-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.631-0500 c20011| 2016-04-06T02:52:08.919-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.633-0500 c20011| 2016-04-06T02:52:08.919-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.635-0500 c20011| 2016-04-06T02:52:08.919-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.637-0500 c20011| 2016-04-06T02:52:08.919-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:48.639-0500 c20011| 2016-04-06T02:52:08.919-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.644-0500 c20011| 2016-04-06T02:52:08.919-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.645-0500 c20011| 2016-04-06T02:52:08.919-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.650-0500 c20011| 2016-04-06T02:52:08.920-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: -89.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-90.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-89.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|22 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.653-0500 c20011| 2016-04-06T02:52:08.920-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:48.655-0500 c20011| 2016-04-06T02:52:08.920-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:48.658-0500 c20011| 2016-04-06T02:52:08.920-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.658-0500 c20011| 2016-04-06T02:52:08.920-0500 D QUERY [conn25] Using idhack: { _id: 
"multidrop.coll-_id_-90.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.659-0500 c20011| 2016-04-06T02:52:08.920-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-89.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.665-0500 c20011| 2016-04-06T02:52:08.920-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.667-0500 c20011| 2016-04-06T02:52:08.920-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.670-0500 c20011| 2016-04-06T02:52:08.922-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|58, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|57, t: 1 }, name-id: "137" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.674-0500 c20011| 2016-04-06T02:52:08.922-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.676-0500 c20011| 2016-04-06T02:52:08.922-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.678-0500 c20011| 2016-04-06T02:52:08.922-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|58, t: 1 } and is durable through: { ts: Timestamp 1459929128000|57, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.680-0500 c20011| 2016-04-06T02:52:08.922-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|58, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|57, t: 1 }, name-id: "137" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.684-0500 c20011| 2016-04-06T02:52:08.922-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.691-0500 c20011| 2016-04-06T02:52:08.922-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: 
Timestamp 1459929128000|58, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.699-0500 c20011| 2016-04-06T02:52:08.922-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.699-0500 c20011| 2016-04-06T02:52:08.922-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.705-0500 c20011| 2016-04-06T02:52:08.922-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.710-0500 c20011| 2016-04-06T02:52:08.922-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|58, t: 1 } and is durable through: { ts: Timestamp 1459929128000|57, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.715-0500 c20011| 2016-04-06T02:52:08.922-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|58, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|57, t: 1 }, name-id: "137" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.721-0500 c20011| 2016-04-06T02:52:08.922-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.726-0500 c20011| 2016-04-06T02:52:08.922-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.727-0500 c20011| 2016-04-06T02:52:08.922-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.731-0500 c20011| 2016-04-06T02:52:08.923-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: 
Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.740-0500 c20011| 2016-04-06T02:52:08.923-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|58, t: 1 } and is durable through: { ts: Timestamp 1459929128000|58, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.743-0500 c20011| 2016-04-06T02:52:08.923-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|58, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.749-0500 c20011| 2016-04-06T02:52:08.923-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.760-0500 c20011| 2016-04-06T02:52:08.923-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.762-0500 c20011| 2016-04-06T02:52:08.923-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.774-0500 c20011| 2016-04-06T02:52:08.923-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: -89.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-90.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-89.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|22 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.777-0500 c20011| 2016-04-06T02:52:08.923-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|58, t: 1 } and is durable through: { ts: Timestamp 1459929128000|58, t: 1 } 
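The D QUERY "Scoring query plan" lines above spell out the arithmetic the server uses to pick a winning index for the config.chunks find: score = baseScore + productivity + tieBreakers, where productivity is advanced/works and each earned tie-breaker contributes 0.0001. A minimal sketch of that printed formula (planScore is a hypothetical helper written for illustration, not server code):

    function planScore(advanced, works, needsFetch, needsSort, usesIxisect) {
        var eps = 0.0001;                           // tie-breaker bonus printed in the log
        var productivity = advanced / works;        // (advanced)/(works)
        var tieBreakers = (needsFetch  ? 0 : eps)   // noFetchBonus
                        + (needsSort   ? 0 : eps)   // noSortBonus
                        + (usesIxisect ? 0 : eps);  // noIxisectBonus
        return 1 + productivity + tieBreakers;      // baseScore(1) + ...
    }

    planScore(1, 1, false, false, false);  // ~2.0003: IXSCAN { ns: 1, lastmod: 1 } wins
    planScore(0, 1, false, true,  false);  // ~1.0002: the { ns: 1, min: 1 } plans lose

The { ns: 1, lastmod: 1 } plan hits EOF after one productive work cycle (planHitEOF=1, so productivity is (1 advanced)/(1 works) = 1) and also earns the noSortBonus, which is exactly the 2.0003-versus-1.0002 margin printed above.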
[js_test:multi_coll_drop] 2016-04-06T02:52:48.784-0500 c20011| 2016-04-06T02:52:08.923-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.788-0500 c20011| 2016-04-06T02:52:08.923-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.793-0500 c20011| 2016-04-06T02:52:08.923-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.798-0500 c20011| 2016-04-06T02:52:08.923-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.923-0500-5704c02865c17830b843f193", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128923), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -90.0 }, max: { _id: MaxKey } }, left: { min: { _id: -90.0 }, max: { _id: -89.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -89.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.801-0500 c20011| 2016-04-06T02:52:08.923-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.805-0500 c20011| 2016-04-06T02:52:08.923-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.807-0500 c20011| 2016-04-06T02:52:08.923-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.815-0500 c20011| 2016-04-06T02:52:08.924-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|59, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: 
Timestamp 1459929128000|58, t: 1 }, name-id: "138" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.821-0500 c20011| 2016-04-06T02:52:08.924-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.821-0500 c20011| 2016-04-06T02:52:08.924-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.823-0500 c20011| 2016-04-06T02:52:08.924-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|59, t: 1 } and is durable through: { ts: Timestamp 1459929128000|58, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.825-0500 c20011| 2016-04-06T02:52:08.924-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|59, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|58, t: 1 }, name-id: "138" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.832-0500 c20011| 2016-04-06T02:52:08.924-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.837-0500 c20011| 2016-04-06T02:52:08.924-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.841-0500 c20011| 2016-04-06T02:52:08.925-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.841-0500 c20011| 2016-04-06T02:52:08.925-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.852-0500 c20011| 2016-04-06T02:52:08.925-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|59, t: 1 } and is durable through: { ts: Timestamp 1459929128000|59, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.853-0500 c20011| 2016-04-06T02:52:08.925-0500 D REPL [conn12] 
Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|59, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.856-0500 c20011| 2016-04-06T02:52:08.925-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.863-0500 c20011| 2016-04-06T02:52:08.925-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.871-0500 c20011| 2016-04-06T02:52:08.925-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.923-0500-5704c02865c17830b843f193", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128923), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -90.0 }, max: { _id: MaxKey } }, left: { min: { _id: -90.0 }, max: { _id: -89.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -89.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.876-0500 c20011| 2016-04-06T02:52:08.926-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f192') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.878-0500 c20011| 2016-04-06T02:52:08.926-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.881-0500 c20011| 2016-04-06T02:52:08.926-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f192') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.887-0500 c20011| 2016-04-06T02:52:08.927-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|60, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|59, t: 1 }, name-id: "139" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.890-0500 c20011| 2016-04-06T02:52:08.927-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|58, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.894-0500 c20011| 2016-04-06T02:52:08.927-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|58, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.897-0500 c20011| 2016-04-06T02:52:08.929-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.897-0500 c20011| 2016-04-06T02:52:08.929-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.900-0500 c20011| 2016-04-06T02:52:08.929-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|60, t: 1 } and is durable through: { ts: Timestamp 1459929128000|59, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.901-0500 c20011| 2016-04-06T02:52:08.929-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|60, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|59, t: 1 }, name-id: "139" } [js_test:multi_coll_drop] 2016-04-06T02:52:48.905-0500 c20011| 2016-04-06T02:52:08.929-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.911-0500 c20011| 2016-04-06T02:52:08.929-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
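
The stretch above captures a w:"majority" write in flight: conn25's findAndModify on config.locks (releasing the distributed lock whose ts is ObjectId('5704c02865c17830b843f192')) cannot be acknowledged until the secondaries report, via replSetUpdatePosition, durable optimes at or past that write. That is why each "Required snapshot optime ... is not yet part of the current 'committed' snapshot" line is followed by more position updates until _lastCommittedOpTime advances. A minimal mongo-shell sketch of the same release, reusing the values logged above (it assumes db points at the config primary c20011, and is illustrative only):

    // Release the distributed lock; the command only returns once the update
    // is durable on a majority of the config replica set, or wtimeout fires.
    db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { ts: ObjectId("5704c02865c17830b843f192") },
        update: { $set: { state: 0 } },                   // state 0 = lock released
        writeConcern: { w: "majority", wtimeout: 15000 }, // ack gated on majority durability
        maxTimeMS: 30000
    });

The completion record logged just below shows the whole round trip, majority wait included, taking 3ms.
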
2016-04-06T02:52:48.916-0500 c20011| 2016-04-06T02:52:08.929-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:48.917-0500 c20011| 2016-04-06T02:52:08.929-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:48.920-0500 c20011| 2016-04-06T02:52:08.929-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|60, t: 1 } and is durable through: { ts: Timestamp 1459929128000|60, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.922-0500 c20011| 2016-04-06T02:52:08.929-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|60, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.924-0500 c20011| 2016-04-06T02:52:08.929-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.934-0500 c20011| 2016-04-06T02:52:08.929-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.937-0500 c20011| 2016-04-06T02:52:08.929-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f192') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.940-0500 c20011| 2016-04-06T02:52:08.929-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.942-0500 c20011| 2016-04-06T02:52:08.929-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.944-0500 c20011| 
2016-04-06T02:52:08.930-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.949-0500 c20011| 2016-04-06T02:52:08.930-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:48.953-0500 c20011| 2016-04-06T02:52:08.930-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|59, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.958-0500 c20011| 2016-04-06T02:52:08.930-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.964-0500 c20011| 2016-04-06T02:52:08.930-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|59, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:48.968-0500 c20011| 2016-04-06T02:52:08.931-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.976-0500 c20011| 2016-04-06T02:52:08.931-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.978-0500 c20011| 2016-04-06T02:52:08.931-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:48.981-0500 c20011| 2016-04-06T02:52:08.931-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:48.986-0500 c20011| 2016-04-06T02:52:08.931-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:48.990-0500 c20011| 2016-04-06T02:52:08.931-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.014-0500 c20011| 2016-04-06T02:52:08.932-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f194'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128932), why: "splitting chunk [{ _id: -89.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.027-0500 c20011| 2016-04-06T02:52:08.932-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.030-0500 c20011| 2016-04-06T02:52:08.932-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.038-0500 c20011| 2016-04-06T02:52:08.932-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.051-0500 c20011| 2016-04-06T02:52:08.932-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.057-0500 c20011| 2016-04-06T02:52:08.933-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|61, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|60, t: 1 }, name-id: "140" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.062-0500 c20011| 2016-04-06T02:52:08.933-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.063-0500 c20011| 2016-04-06T02:52:08.933-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.069-0500 c20011| 2016-04-06T02:52:08.933-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|61, t: 1 } and is durable through: { ts: Timestamp 1459929128000|60, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.073-0500 c20011| 2016-04-06T02:52:08.933-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|61, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|60, t: 1 }, name-id: "140" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.077-0500 c20011| 2016-04-06T02:52:08.933-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.080-0500 c20011| 2016-04-06T02:52:08.933-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.086-0500 c20011| 2016-04-06T02:52:08.934-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.090-0500 c20011| 2016-04-06T02:52:08.934-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.093-0500 c20011| 2016-04-06T02:52:08.934-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|61, t: 1 } and is durable through: { ts: Timestamp 1459929128000|61, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.096-0500 c20011| 2016-04-06T02:52:08.934-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|61, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.102-0500 c20011| 2016-04-06T02:52:08.934-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.108-0500 c20011| 2016-04-06T02:52:08.934-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.112-0500 c20011| 2016-04-06T02:52:08.934-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|58, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.117-0500 c20011| 2016-04-06T02:52:08.934-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f194'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128932), why: "splitting chunk [{ _id: -89.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f194'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128932), why: "splitting chunk [{ _id: -89.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.129-0500 c20011| 2016-04-06T02:52:08.934-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, 
collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|58, t: 1 } } cursorid:17466612721 numYields:0 nreturned:3 reslen:1280 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.133-0500 c20011| 2016-04-06T02:52:08.934-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.140-0500 c20011| 2016-04-06T02:52:08.935-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.143-0500 c20011| 2016-04-06T02:52:08.935-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.149-0500 c20011| 2016-04-06T02:52:08.936-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: -88.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-89.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-88.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|24 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.157-0500 c20011| 2016-04-06T02:52:08.936-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:49.161-0500 c20011| 2016-04-06T02:52:08.936-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:49.167-0500 c20011| 2016-04-06T02:52:08.936-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.170-0500 c20011| 2016-04-06T02:52:08.936-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-89.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.172-0500 c20011| 2016-04-06T02:52:08.936-0500 D QUERY [conn25] Using idhack: { _id: 
"multidrop.coll-_id_-88.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.182-0500 c20011| 2016-04-06T02:52:08.936-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.182-0500 c20011| 2016-04-06T02:52:08.936-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.184-0500 c20011| 2016-04-06T02:52:08.936-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.189-0500 c20011| 2016-04-06T02:52:08.936-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|61, t: 1 } and is durable through: { ts: Timestamp 1459929128000|58, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.193-0500 c20011| 2016-04-06T02:52:08.937-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.196-0500 c20011| 2016-04-06T02:52:08.937-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.202-0500 c20011| 2016-04-06T02:52:08.937-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.206-0500 c20011| 2016-04-06T02:52:08.937-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|62, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|61, t: 1 }, name-id: "141" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.210-0500 c20011| 2016-04-06T02:52:08.937-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 2 } }, Database: { 
acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.214-0500 c20011| 2016-04-06T02:52:08.938-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.215-0500 c20011| 2016-04-06T02:52:08.938-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.219-0500 c20011| 2016-04-06T02:52:08.938-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|62, t: 1 } and is durable through: { ts: Timestamp 1459929128000|61, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.224-0500 c20011| 2016-04-06T02:52:08.938-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|62, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|61, t: 1 }, name-id: "141" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.230-0500 c20011| 2016-04-06T02:52:08.938-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.231-0500 c20011| 2016-04-06T02:52:08.939-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.232-0500 c20011| 2016-04-06T02:52:08.938-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.240-0500 c20011| 2016-04-06T02:52:08.939-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.246-0500 c20011| 2016-04-06T02:52:08.939-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.247-0500 c20011| 
2016-04-06T02:52:08.939-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|61, t: 1 } and is durable through: { ts: Timestamp 1459929128000|61, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.254-0500 c20011| 2016-04-06T02:52:08.939-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|62, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|61, t: 1 }, name-id: "141" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.260-0500 c20011| 2016-04-06T02:52:08.939-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.266-0500 c20011| 2016-04-06T02:52:08.939-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.269-0500 c20011| 2016-04-06T02:52:08.939-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.274-0500 c20011| 2016-04-06T02:52:08.939-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.278-0500 c20011| 2016-04-06T02:52:08.939-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|62, t: 1 } and is durable through: { ts: Timestamp 1459929128000|61, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.280-0500 c20011| 2016-04-06T02:52:08.939-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|62, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|61, t: 1 }, name-id: "141" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.289-0500 c20011| 2016-04-06T02:52:08.939-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} 
protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.295-0500 c20011| 2016-04-06T02:52:08.939-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.302-0500 c20011| 2016-04-06T02:52:08.939-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.303-0500 c20011| 2016-04-06T02:52:08.940-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.307-0500 c20011| 2016-04-06T02:52:08.940-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|62, t: 1 } and is durable through: { ts: Timestamp 1459929128000|62, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.307-0500 c20011| 2016-04-06T02:52:08.940-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|62, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.311-0500 c20011| 2016-04-06T02:52:08.940-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.321-0500 c20011| 2016-04-06T02:52:08.940-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.324-0500 c20011| 2016-04-06T02:52:08.940-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.330-0500 c20011| 2016-04-06T02:52:08.940-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: -88.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-89.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-88.0" } } ], 
preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|24 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.334-0500 c20011| 2016-04-06T02:52:08.940-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.338-0500 c20011| 2016-04-06T02:52:08.940-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.341-0500 c20011| 2016-04-06T02:52:08.940-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.940-0500-5704c02865c17830b843f195", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128940), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -89.0 }, max: { _id: MaxKey } }, left: { min: { _id: -89.0 }, max: { _id: -88.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -88.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.346-0500 c20011| 2016-04-06T02:52:08.941-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.351-0500 c20011| 2016-04-06T02:52:08.941-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.352-0500 c20011| 2016-04-06T02:52:08.941-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.358-0500 c20011| 2016-04-06T02:52:08.941-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } 
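
The applyOps entry that just completed above is the metadata commit for one split step: the shard first takes the collection's distributed lock with a findAndModify on config.locks (state 0 -> 2), then writes both halves of the split atomically, guarded by a preCondition that the collection's highest chunk version is still Timestamp 1000|24. A minimal mongo-shell sketch reconstructing that commit from the logged values (run against the config server; the chunk documents and preCondition are copied verbatim from the log, and this illustrates the protocol rather than a supported admin procedure):

    var cfg = db.getSiblingDB("config");

    // Step 1: acquire the collection's distributed lock, as conn25 does above.
    cfg.runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: {
            ts: ObjectId("5704c02865c17830b843f194"),
            state: 2,
            who: "mongovm16:20010:1459929128:185613966:conn5",
            process: "mongovm16:20010:1459929128:185613966",
            when: new Date(1459929128932),
            why: "splitting chunk [{ _id: -89.0 }, { _id: MaxKey }) in multidrop.coll"
        } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });

    // Step 2: commit both chunk documents in one applyOps; the preCondition
    // aborts the commit if another process bumped the collection version
    // (lastmod) since this split was planned.
    cfg.runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp(1000, 25),
                   lastmodEpoch: ObjectId("5704c02806c33406d4d9c0c0"),
                   ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: -88.0 },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-89.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp(1000, 26),
                   lastmodEpoch: ObjectId("5704c02806c33406d4d9c0c0"),
                   ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: MaxKey },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-88.0" } }
        ],
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1000, 24) } } ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });

The preCondition plus majority write concern gives the config server a transaction-like guarantee here: either both chunk documents change (and the changelog insert that follows records the split), or the applyOps fails and the split is retried.
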
[js_test:multi_coll_drop] 2016-04-06T02:52:49.363-0500 c20011| 2016-04-06T02:52:08.941-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.367-0500 c20011| 2016-04-06T02:52:08.941-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|62, t: 1 } and is durable through: { ts: Timestamp 1459929128000|62, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.375-0500 c20011| 2016-04-06T02:52:08.941-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.384-0500 c20011| 2016-04-06T02:52:08.941-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.386-0500 c20011| 2016-04-06T02:52:08.941-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.390-0500 c20011| 2016-04-06T02:52:08.941-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|63, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|62, t: 1 }, name-id: "142" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.396-0500 c20011| 2016-04-06T02:52:08.942-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.396-0500 c20011| 2016-04-06T02:52:08.942-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.400-0500 c20011| 2016-04-06T02:52:08.942-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 
1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.403-0500 c20011| 2016-04-06T02:52:08.942-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|63, t: 1 } and is durable through: { ts: Timestamp 1459929128000|62, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.405-0500 c20011| 2016-04-06T02:52:08.942-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|63, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|62, t: 1 }, name-id: "142" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.413-0500 c20011| 2016-04-06T02:52:08.943-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.418-0500 c20011| 2016-04-06T02:52:08.943-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.419-0500 c20011| 2016-04-06T02:52:08.943-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.423-0500 c20011| 2016-04-06T02:52:08.943-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|63, t: 1 } and is durable through: { ts: Timestamp 1459929128000|62, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.428-0500 c20011| 2016-04-06T02:52:08.943-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|63, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|62, t: 1 }, name-id: "142" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.434-0500 c20011| 2016-04-06T02:52:08.943-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.438-0500 c20011| 2016-04-06T02:52:08.943-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.440-0500 c20011| 2016-04-06T02:52:08.943-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.444-0500 c20011| 2016-04-06T02:52:08.944-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.448-0500 c20011| 2016-04-06T02:52:08.944-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.449-0500 c20011| 2016-04-06T02:52:08.944-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.451-0500 c20011| 2016-04-06T02:52:08.944-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.453-0500 c20011| 2016-04-06T02:52:08.944-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|63, t: 1 } and is durable through: { ts: Timestamp 1459929128000|63, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.456-0500 c20011| 2016-04-06T02:52:08.944-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|63, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.467-0500 c20011| 2016-04-06T02:52:08.944-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.476-0500 c20011| 2016-04-06T02:52:08.944-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.481-0500 c20011| 2016-04-06T02:52:08.944-0500 I COMMAND [conn25] 
command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.940-0500-5704c02865c17830b843f195", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128940), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -89.0 }, max: { _id: MaxKey } }, left: { min: { _id: -89.0 }, max: { _id: -88.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -88.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.485-0500 c20011| 2016-04-06T02:52:08.944-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.487-0500 c20011| 2016-04-06T02:52:08.944-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f194') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.489-0500 c20011| 2016-04-06T02:52:08.944-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.492-0500 c20011| 2016-04-06T02:52:08.944-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f194') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.497-0500 c20011| 2016-04-06T02:52:08.944-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.504-0500 c20011| 2016-04-06T02:52:08.944-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.505-0500 c20011| 2016-04-06T02:52:08.944-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.507-0500 c20011| 2016-04-06T02:52:08.944-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|63, t: 1 } and is durable through: { ts: Timestamp 1459929128000|63, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.510-0500 c20011| 2016-04-06T02:52:08.944-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.516-0500 c20011| 2016-04-06T02:52:08.944-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.518-0500 c20011| 2016-04-06T02:52:08.944-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.521-0500 c20011| 2016-04-06T02:52:08.945-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.526-0500 c20011| 2016-04-06T02:52:08.945-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ 
Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.532-0500 c20011| 2016-04-06T02:52:08.947-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|64, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|63, t: 1 }, name-id: "143" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.535-0500 c20011| 2016-04-06T02:52:08.947-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.545-0500 c20011| 2016-04-06T02:52:08.948-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.554-0500 c20011| 2016-04-06T02:52:08.948-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.556-0500 c20011| 2016-04-06T02:52:08.948-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.557-0500 c20011| 2016-04-06T02:52:08.948-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|64, t: 1 } and is durable through: { ts: Timestamp 1459929128000|63, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.561-0500 c20011| 2016-04-06T02:52:08.948-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|64, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|63, t: 1 }, name-id: "143" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.567-0500 c20011| 2016-04-06T02:52:08.948-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.569-0500 c20011| 2016-04-06T02:52:08.948-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.585-0500 c20011| 2016-04-06T02:52:08.948-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.589-0500 c20011| 2016-04-06T02:52:08.948-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.595-0500 c20011| 2016-04-06T02:52:08.948-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.604-0500 c20011| 2016-04-06T02:52:08.948-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|64, t: 1 } and is durable through: { ts: Timestamp 1459929128000|63, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.609-0500 c20011| 2016-04-06T02:52:08.948-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|64, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|63, t: 1 }, name-id: "143" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.615-0500 c20011| 2016-04-06T02:52:08.948-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.629-0500 c20011| 2016-04-06T02:52:08.950-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.632-0500 c20011| 2016-04-06T02:52:08.950-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.637-0500 c20011| 2016-04-06T02:52:08.950-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|64, t: 1 } and is durable through: { ts: Timestamp 1459929128000|64, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.640-0500 c20011| 2016-04-06T02:52:08.950-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|64, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.643-0500 c20011| 2016-04-06T02:52:08.950-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { 
ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.649-0500 c20011| 2016-04-06T02:52:08.950-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.650-0500 c20011| 2016-04-06T02:52:08.950-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.651-0500 c20011| 2016-04-06T02:52:08.950-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.661-0500 c20011| 2016-04-06T02:52:08.950-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.663-0500 c20011| 2016-04-06T02:52:08.950-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|64, t: 1 } and is durable through: { ts: Timestamp 1459929128000|64, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.666-0500 c20011| 2016-04-06T02:52:08.950-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.671-0500 c20011| 2016-04-06T02:52:08.951-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.678-0500 c20011| 2016-04-06T02:52:08.951-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f194') 
}, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.680-0500 c20011| 2016-04-06T02:52:08.951-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.684-0500 c20011| 2016-04-06T02:52:08.951-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.689-0500 c20011| 2016-04-06T02:52:08.951-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.699-0500 c20011| 2016-04-06T02:52:08.951-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.704-0500 c20011| 2016-04-06T02:52:08.951-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.707-0500 c20011| 2016-04-06T02:52:08.951-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.710-0500 c20011| 2016-04-06T02:52:08.951-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:49.715-0500 c20011| 2016-04-06T02:52:08.952-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.718-0500 c20011| 2016-04-06T02:52:08.952-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|24 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.722-0500 c20011| 2016-04-06T02:52:08.952-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.738-0500 c20011| 2016-04-06T02:52:08.952-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|24 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.743-0500 c20011| 2016-04-06T02:52:08.952-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:49.746-0500 c20011| 2016-04-06T02:52:08.952-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|24 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.747-0500 c20011| 2016-04-06T02:52:08.953-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.799-0500 c20011| 2016-04-06T02:52:08.953-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.806-0500 c20011| 2016-04-06T02:52:08.953-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.808-0500 c20011| 2016-04-06T02:52:08.953-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:49.813-0500 c20011| 2016-04-06T02:52:08.953-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.820-0500 c20011| 2016-04-06T02:52:08.953-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f196'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128953), why: "splitting chunk [{ _id: -88.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.823-0500 c20011| 2016-04-06T02:52:08.953-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.830-0500 c20011| 2016-04-06T02:52:08.953-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.834-0500 c20011| 2016-04-06T02:52:08.954-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.841-0500 c20011| 2016-04-06T02:52:08.954-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.850-0500 c20011| 2016-04-06T02:52:08.954-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.874-0500 c20011| 2016-04-06T02:52:08.956-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.876-0500 c20011| 2016-04-06T02:52:08.956-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.878-0500 c20011| 2016-04-06T02:52:08.956-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.881-0500 c20011| 2016-04-06T02:52:08.956-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|65, t: 1 } and is durable through: { ts: Timestamp 1459929128000|64, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.887-0500 c20011| 2016-04-06T02:52:08.956-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.892-0500 c20011| 2016-04-06T02:52:08.956-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.906-0500 c20011| 2016-04-06T02:52:08.956-0500 D COMMAND [conn12] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.907-0500 c20011| 2016-04-06T02:52:08.956-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:49.910-0500 c20011| 2016-04-06T02:52:08.957-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|65, t: 1 } and is durable through: { ts: Timestamp 1459929128000|64, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.913-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.914-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.916-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.918-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.919-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.920-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.921-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.921-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.922-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.923-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.923-0500 c20012| 2016-04-06T02:52:08.581-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:49.924-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.924-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.925-0500 c20012| 2016-04-06T02:52:08.581-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:49.928-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.930-0500 c20012| 
2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.931-0500 c20012| 2016-04-06T02:52:08.581-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.931-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.933-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.934-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.943-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.944-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.945-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.947-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.948-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.950-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.955-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.958-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.958-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.959-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.960-0500 c20012| 2016-04-06T02:52:08.582-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:49.962-0500 c20012| 2016-04-06T02:52:08.582-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:49.968-0500 c20012| 2016-04-06T02:52:08.582-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.970-0500 c20012| 2016-04-06T02:52:08.582-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 450 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|20, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:49.970-0500 c20012| 2016-04-06T02:52:08.582-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 450 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:49.972-0500 s20015| 2016-04-06T02:52:32.631-0500 D ASIO [Balancer] startCommand: RemoteCommand 62 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:02.631-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.973-0500 s20015| 2016-04-06T02:52:32.631-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 62 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:49.973-0500 c20011| 2016-04-06T02:52:08.957-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:49.976-0500 c20011| 2016-04-06T02:52:08.957-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:49.977-0500 c20011| 2016-04-06T02:52:08.957-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:49.978-0500 c20011| 2016-04-06T02:52:08.959-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|65, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|64, t: 1 }, name-id: "144" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.012-0500 c20011| 2016-04-06T02:52:08.960-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.013-0500 c20011| 2016-04-06T02:52:08.960-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.015-0500 c20011| 2016-04-06T02:52:08.960-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.017-0500 c20011| 2016-04-06T02:52:08.960-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|65, t: 1 } and is durable through: { ts: Timestamp 1459929128000|65, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.017-0500 c20011| 2016-04-06T02:52:08.960-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|65, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.022-0500 c20011| 2016-04-06T02:52:08.960-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.026-0500 c20011| 2016-04-06T02:52:08.960-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.029-0500 c20011| 2016-04-06T02:52:08.961-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } 
protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.034-0500 c20011| 2016-04-06T02:52:08.961-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f196'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128953), why: "splitting chunk [{ _id: -88.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f196'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128953), why: "splitting chunk [{ _id: -88.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.039-0500 c20011| 2016-04-06T02:52:08.961-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.049-0500 c20011| 2016-04-06T02:52:08.962-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.049-0500 c20011| 2016-04-06T02:52:08.962-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.052-0500 c20011| 2016-04-06T02:52:08.962-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|65, t: 1 } and is durable through: { ts: Timestamp 1459929128000|65, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.053-0500 c20011| 2016-04-06T02:52:08.962-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.061-0500 c20011| 2016-04-06T02:52:08.962-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms 
[js_test:multi_coll_drop] 2016-04-06T02:52:50.070-0500 c20011| 2016-04-06T02:52:08.963-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.074-0500 c20011| 2016-04-06T02:52:08.963-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|26 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|65, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.081-0500 c20011| 2016-04-06T02:52:08.963-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.087-0500 c20011| 2016-04-06T02:52:08.963-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|26 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|65, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.091-0500 c20011| 2016-04-06T02:52:08.963-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:50.098-0500 c20011| 2016-04-06T02:52:08.963-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|26 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|65, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.104-0500 c20011| 2016-04-06T02:52:08.963-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: -87.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-88.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-87.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|26 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.110-0500 c20011| 2016-04-06T02:52:08.963-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:50.115-0500 c20011| 2016-04-06T02:52:08.963-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 
noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:50.128-0500 c20011| 2016-04-06T02:52:08.964-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.129-0500 c20011| 2016-04-06T02:52:08.964-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-88.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.132-0500 c20011| 2016-04-06T02:52:08.964-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-87.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.136-0500 c20011| 2016-04-06T02:52:08.964-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.141-0500 c20011| 2016-04-06T02:52:08.964-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.148-0500 c20011| 2016-04-06T02:52:08.966-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.148-0500 c20011| 2016-04-06T02:52:08.966-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.153-0500 c20011| 2016-04-06T02:52:08.966-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.207-0500 c20011| 2016-04-06T02:52:08.966-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|66, t: 1 } and is durable through: { ts: Timestamp 1459929128000|65, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.217-0500 c20011| 2016-04-06T02:52:08.966-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.223-0500 c20011| 2016-04-06T02:52:08.966-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.224-0500 c20011| 2016-04-06T02:52:08.966-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.230-0500 c20011| 2016-04-06T02:52:08.966-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|66, t: 1 } and is durable through: { ts: Timestamp 1459929128000|65, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.232-0500 c20011| 2016-04-06T02:52:08.966-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.238-0500 c20011| 2016-04-06T02:52:08.966-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.241-0500 c20011| 2016-04-06T02:52:08.966-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.246-0500 c20011| 2016-04-06T02:52:08.967-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|66, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|65, t: 1 }, name-id: "145" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.248-0500 c20011| 2016-04-06T02:52:08.967-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.253-0500 c20011| 2016-04-06T02:52:08.967-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.253-0500 c20011| 2016-04-06T02:52:08.967-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.260-0500 c20011| 2016-04-06T02:52:08.967-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.262-0500 c20011| 2016-04-06T02:52:08.967-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.267-0500 c20011| 2016-04-06T02:52:08.967-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.275-0500 c20011| 2016-04-06T02:52:08.967-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|66, t: 1 } and is durable through: { ts: Timestamp 1459929128000|66, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.281-0500 c20011| 2016-04-06T02:52:08.967-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|66, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.288-0500 c20011| 2016-04-06T02:52:08.967-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.291-0500 c20011| 2016-04-06T02:52:08.967-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|66, t: 1 } and is durable through: { ts: Timestamp 1459929128000|66, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.296-0500 c20011| 2016-04-06T02:52:08.967-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.306-0500 c20011| 2016-04-06T02:52:08.967-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: 
-1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.323-0500 c20011| 2016-04-06T02:52:08.967-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: -87.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-88.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-87.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|26 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.327-0500 c20011| 2016-04-06T02:52:08.967-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.330-0500 c20011| 2016-04-06T02:52:08.967-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.342-0500 c20011| 2016-04-06T02:52:08.968-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.967-0500-5704c02865c17830b843f197", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128967), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -88.0 }, max: { _id: MaxKey } }, left: { min: { _id: -88.0 }, max: { _id: -87.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -87.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.346-0500 c20011| 2016-04-06T02:52:08.968-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, 
term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.351-0500 c20011| 2016-04-06T02:52:08.968-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.357-0500 c20011| 2016-04-06T02:52:08.968-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.359-0500 c20011| 2016-04-06T02:52:08.968-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|67, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|66, t: 1 }, name-id: "146" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.362-0500 c20011| 2016-04-06T02:52:08.968-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.368-0500 c20011| 2016-04-06T02:52:08.970-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.369-0500 c20011| 2016-04-06T02:52:08.970-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.373-0500 c20011| 2016-04-06T02:52:08.970-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.377-0500 c20011| 2016-04-06T02:52:08.970-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|67, t: 1 } and is durable through: { ts: Timestamp 1459929128000|66, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.382-0500 c20011| 2016-04-06T02:52:08.970-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|67, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|66, t: 1 }, name-id: "146" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.385-0500 c20011| 2016-04-06T02:52:08.970-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, 
appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.389-0500 c20011| 2016-04-06T02:52:08.970-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.392-0500 c20011| 2016-04-06T02:52:08.971-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.393-0500 c20011| 2016-04-06T02:52:08.971-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.394-0500 c20011| 2016-04-06T02:52:08.971-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|67, t: 1 } and is durable through: { ts: Timestamp 1459929128000|66, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.395-0500 c20011| 2016-04-06T02:52:08.971-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|67, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|66, t: 1 }, name-id: "146" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.401-0500 c20011| 2016-04-06T02:52:08.971-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.407-0500 c20011| 2016-04-06T02:52:08.971-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.411-0500 c20011| 2016-04-06T02:52:08.971-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.421-0500 c20011| 2016-04-06T02:52:08.972-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, 
appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.423-0500 c20011| 2016-04-06T02:52:08.972-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.430-0500 c20011| 2016-04-06T02:52:08.972-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.431-0500 c20011| 2016-04-06T02:52:08.972-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.443-0500 c20011| 2016-04-06T02:52:08.972-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.448-0500 c20011| 2016-04-06T02:52:08.972-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|67, t: 1 } and is durable through: { ts: Timestamp 1459929128000|67, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.451-0500 c20011| 2016-04-06T02:52:08.972-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|67, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.471-0500 c20011| 2016-04-06T02:52:08.972-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.472-0500 c20011| 2016-04-06T02:52:08.972-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|67, t: 1 } and is durable through: { ts: Timestamp 1459929128000|67, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.477-0500 c20011| 2016-04-06T02:52:08.972-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.485-0500 c20011| 2016-04-06T02:52:08.972-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.495-0500 c20011| 2016-04-06T02:52:08.972-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.499-0500 c20011| 2016-04-06T02:52:08.972-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.505-0500 c20011| 2016-04-06T02:52:08.972-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:08.967-0500-5704c02865c17830b843f197", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128967), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -88.0 }, max: { _id: MaxKey } }, left: { min: { _id: -88.0 }, max: { _id: -87.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -87.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.511-0500 c20011| 2016-04-06T02:52:08.972-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f196') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.511-0500 c20011| 2016-04-06T02:52:08.972-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.515-0500 c20011| 2016-04-06T02:52:08.972-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.524-0500 c20011| 2016-04-06T02:52:08.972-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02865c17830b843f196') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.534-0500 c20011| 2016-04-06T02:52:08.972-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.541-0500 c20011| 2016-04-06T02:52:08.973-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.549-0500 c20011| 2016-04-06T02:52:08.973-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.555-0500 c20011| 2016-04-06T02:52:08.973-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|68, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|67, t: 1 }, name-id: "147" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.564-0500 c20011| 2016-04-06T02:52:08.975-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.566-0500 c20011| 2016-04-06T02:52:08.975-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.569-0500 c20011| 2016-04-06T02:52:08.975-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.573-0500 c20011| 2016-04-06T02:52:08.975-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|68, t: 1 } and is durable through: { ts: Timestamp 1459929128000|67, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.577-0500 c20011| 2016-04-06T02:52:08.975-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929128000|68, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|67, t: 1 }, name-id: "147" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.586-0500 c20011| 2016-04-06T02:52:08.975-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.595-0500 c20011| 2016-04-06T02:52:08.975-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.599-0500 c20011| 2016-04-06T02:52:08.975-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.602-0500 c20011| 2016-04-06T02:52:08.975-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.606-0500 c20011| 2016-04-06T02:52:08.975-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|68, t: 1 } and is durable through: { ts: Timestamp 1459929128000|67, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.607-0500 c20011| 2016-04-06T02:52:08.975-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929128000|68, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|67, t: 1 }, name-id: "147" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.610-0500 c20011| 2016-04-06T02:52:08.975-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.616-0500 c20011| 2016-04-06T02:52:08.975-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.620-0500 c20011| 2016-04-06T02:52:08.975-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.628-0500 c20011| 2016-04-06T02:52:08.976-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.630-0500 c20011| 2016-04-06T02:52:08.976-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.635-0500 c20011| 2016-04-06T02:52:08.976-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|68, t: 1 } and is durable through: { ts: Timestamp 1459929128000|68, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.636-0500 c20011| 2016-04-06T02:52:08.976-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|68, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.641-0500 c20011| 2016-04-06T02:52:08.976-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.647-0500 c20011| 2016-04-06T02:52:08.976-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.653-0500 c20011| 2016-04-06T02:52:08.976-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.655-0500 c20011| 2016-04-06T02:52:08.976-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.660-0500 c20011| 2016-04-06T02:52:08.976-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f196') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, 
Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.664-0500 c20011| 2016-04-06T02:52:08.976-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.665-0500 c20011| 2016-04-06T02:52:08.977-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.669-0500 c20011| 2016-04-06T02:52:08.977-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|26 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.671-0500 c20011| 2016-04-06T02:52:08.977-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.674-0500 c20011| 2016-04-06T02:52:08.977-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|26 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.680-0500 c20011| 2016-04-06T02:52:08.977-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:50.689-0500 c20011| 2016-04-06T02:52:08.977-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|26 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.696-0500 c20011| 2016-04-06T02:52:08.978-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.698-0500 c20011| 2016-04-06T02:52:08.978-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.701-0500 c20011| 2016-04-06T02:52:08.979-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: 
Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.704-0500 c20011| 2016-04-06T02:52:08.979-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|68, t: 1 } and is durable through: { ts: Timestamp 1459929128000|68, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.713-0500 c20011| 2016-04-06T02:52:08.979-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.721-0500 c20011| 2016-04-06T02:52:08.981-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f198'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128978), why: "splitting chunk [{ _id: -87.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.729-0500 c20011| 2016-04-06T02:52:08.981-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.732-0500 c20011| 2016-04-06T02:52:08.981-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.741-0500 c20011| 2016-04-06T02:52:08.981-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.753-0500 c20011| 2016-04-06T02:52:08.982-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.761-0500 c20011| 2016-04-06T02:52:08.982-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.768-0500 c20011| 2016-04-06T02:52:08.984-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.768-0500 c20011| 2016-04-06T02:52:08.984-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.772-0500 c20011| 2016-04-06T02:52:08.984-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.776-0500 c20011| 2016-04-06T02:52:08.984-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|69, t: 1 } and is durable through: { ts: Timestamp 1459929128000|68, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.795-0500 c20011| 2016-04-06T02:52:08.984-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.804-0500 c20011| 2016-04-06T02:52:08.984-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.809-0500 c20011| 2016-04-06T02:52:08.985-0500 D COMMAND [conn15] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.810-0500 c20011| 2016-04-06T02:52:08.985-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.814-0500 c20011| 2016-04-06T02:52:08.985-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.819-0500 c20011| 2016-04-06T02:52:08.985-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|69, t: 1 } and is durable through: { ts: Timestamp 1459929128000|68, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.827-0500 c20011| 2016-04-06T02:52:08.985-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.832-0500 c20011| 2016-04-06T02:52:08.985-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.834-0500 c20011| 2016-04-06T02:52:08.986-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|69, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|68, t: 1 }, name-id: "148" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.837-0500 c20011| 2016-04-06T02:52:08.987-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.837-0500 c20011| 2016-04-06T02:52:08.987-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.839-0500 c20011| 2016-04-06T02:52:08.987-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } 
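[annotation] The records above show the two replication mechanisms driving the config server: secondaries report their progress with replSetUpdatePosition, and the primary holds majority writes and reads until _lastCommittedOpTime advances past the required optime ("Required snapshot optime ... is not yet part of the current 'committed' snapshot"). Below is a minimal mongo-shell sketch of the client-visible side of that contract; it is not part of multi_coll_drop.js, the host/port and the "sketch" collection are illustrative, and readConcern "majority" assumes the server runs with majority read concern enabled (as the config servers in this suite do).

    // Minimal sketch (not part of the test): the client-side view of the
    // majority-commit wait logged above. Host, database, and collection
    // names are illustrative.
    var conn = new Mongo("localhost:20011");
    var testDB = conn.getDB("test");

    // A w:"majority" write is acknowledged only after a majority of members
    // report it durable via replSetUpdatePosition, as in the records above.
    assert.writeOK(testDB.sketch.insert(
        { _id: 1 },
        { writeConcern: { w: "majority", wtimeout: 15000 } }));

    // A readConcern "majority" find blocks until the committed snapshot
    // catches up -- the same wait the log shows as "Waiting for 'committed'
    // snapshot to be available for reading".
    var res = testDB.runCommand({
        find: "sketch",
        filter: { _id: 1 },
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });
    assert.eq(1, res.cursor.firstBatch.length);

[annotation] Note that every config metadata write in this log carries the identical writeConcern, { w: "majority", wtimeout: 15000 }, which is why each one is followed by a burst of replSetUpdatePosition traffic before the command line reports completion.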
[js_test:multi_coll_drop] 2016-04-06T02:52:50.840-0500 c20011| 2016-04-06T02:52:08.987-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|69, t: 1 } and is durable through: { ts: Timestamp 1459929128000|69, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.841-0500 c20011| 2016-04-06T02:52:08.987-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|69, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.845-0500 c20011| 2016-04-06T02:52:08.987-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.850-0500 c20011| 2016-04-06T02:52:08.987-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.852-0500 c20011| 2016-04-06T02:52:08.987-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.857-0500 c20011| 2016-04-06T02:52:08.987-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|69, t: 1 } and is durable through: { ts: Timestamp 1459929128000|69, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.861-0500 c20011| 2016-04-06T02:52:08.987-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.864-0500 c20011| 2016-04-06T02:52:08.987-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.868-0500 c20011| 2016-04-06T02:52:08.987-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.872-0500 c20011| 2016-04-06T02:52:08.987-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.874-0500 c20011| 2016-04-06T02:52:08.987-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.877-0500 c20011| 2016-04-06T02:52:08.988-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.883-0500 c20011| 2016-04-06T02:52:08.988-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02865c17830b843f198'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128978), why: "splitting chunk [{ _id: -87.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02865c17830b843f198'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929128978), why: "splitting chunk [{ _id: -87.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.887-0500 c20011| 2016-04-06T02:52:08.989-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|28 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|69, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.889-0500 c20011| 2016-04-06T02:52:08.989-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.894-0500 c20011| 2016-04-06T02:52:08.989-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|28 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|69, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.896-0500 c20011| 2016-04-06T02:52:08.989-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:50.900-0500 c20011| 2016-04-06T02:52:08.989-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|28 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|69, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.909-0500 c20011| 2016-04-06T02:52:08.990-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: -86.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-87.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-86.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|28 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.912-0500 c20011| 2016-04-06T02:52:08.990-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:50.916-0500 c20011| 2016-04-06T02:52:08.990-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:50.921-0500 c20011| 2016-04-06T02:52:08.990-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.924-0500 c20011| 2016-04-06T02:52:08.990-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-87.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.924-0500 c20011| 2016-04-06T02:52:08.991-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-86.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.928-0500 c20011| 2016-04-06T02:52:08.991-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", 
maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.934-0500 c20011| 2016-04-06T02:52:08.991-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.938-0500 c20011| 2016-04-06T02:52:08.993-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.938-0500 c20011| 2016-04-06T02:52:08.993-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.940-0500 c20011| 2016-04-06T02:52:08.993-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.944-0500 c20011| 2016-04-06T02:52:08.993-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|70, t: 1 } and is durable through: { ts: Timestamp 1459929128000|69, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.948-0500 c20011| 2016-04-06T02:52:08.993-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.952-0500 c20011| 2016-04-06T02:52:08.993-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.955-0500 c20011| 2016-04-06T02:52:08.993-0500 D COMMAND [conn12] command: 
replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.963-0500 c20011| 2016-04-06T02:52:08.993-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.965-0500 c20011| 2016-04-06T02:52:08.993-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|70, t: 1 } and is durable through: { ts: Timestamp 1459929128000|69, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.969-0500 c20011| 2016-04-06T02:52:08.993-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.973-0500 c20011| 2016-04-06T02:52:08.993-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:50.975-0500 c20011| 2016-04-06T02:52:08.994-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929128000|70, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|69, t: 1 }, name-id: "149" } [js_test:multi_coll_drop] 2016-04-06T02:52:50.978-0500 c20011| 2016-04-06T02:52:08.994-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:50.989-0500 c20011| 2016-04-06T02:52:08.996-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:50.989-0500 c20011| 2016-04-06T02:52:08.996-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:50.993-0500 c20011| 2016-04-06T02:52:08.996-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:50.996-0500 c20011| 2016-04-06T02:52:08.997-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|70, t: 1 } and is durable through: { ts: Timestamp 1459929128000|70, t: 1 } [js_test:multi_coll_drop] 
2016-04-06T02:52:50.999-0500 c20011| 2016-04-06T02:52:08.997-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|70, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.060-0500 c20011| 2016-04-06T02:52:08.997-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.065-0500 c20011| 2016-04-06T02:52:08.997-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.073-0500 c20011| 2016-04-06T02:52:08.997-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.080-0500 c20011| 2016-04-06T02:52:08.997-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: -86.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-87.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-86.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|28 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.082-0500 c20011| 2016-04-06T02:52:08.997-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:51.085-0500 c20011| 2016-04-06T02:52:08.997-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: 
[js_test:multi_coll_drop] 2016-04-06T02:52:51.082-0500 c20011| 2016-04-06T02:52:08.997-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.085-0500 c20011| 2016-04-06T02:52:08.997-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.091-0500 c20011| 2016-04-06T02:52:09.014-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.092-0500 c20011| 2016-04-06T02:52:09.014-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.096-0500 c20011| 2016-04-06T02:52:09.014-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929128000|70, t: 1 } and is durable through: { ts: Timestamp 1459929128000|70, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.101-0500 c20011| 2016-04-06T02:52:09.014-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.109-0500 c20011| 2016-04-06T02:52:09.014-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.117-0500 c20011| 2016-04-06T02:52:09.014-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:09.014-0500-5704c02965c17830b843f199", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129014), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -87.0 }, max: { _id: MaxKey } }, left: { min: { _id: -87.0 }, max: { _id: -86.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -86.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.120-0500 c20011| 2016-04-06T02:52:09.019-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 22ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.128-0500 c20011| 2016-04-06T02:52:09.020-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } cursorid:20785203637 numYields:1 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 22ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.130-0500 c20011| 2016-04-06T02:52:09.022-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|1, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|70, t: 1 }, name-id: "150" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.136-0500 c20011| 2016-04-06T02:52:09.022-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.150-0500 c20011| 2016-04-06T02:52:09.022-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.154-0500 c20011| 2016-04-06T02:52:09.022-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|1, t: 1 } and is durable through: { ts: Timestamp 1459929128000|70, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.170-0500 c20011| 2016-04-06T02:52:09.022-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929129000|1, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|70, t: 1 }, name-id: "150" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.173-0500 c20011| 2016-04-06T02:52:09.022-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.176-0500 c20011| 2016-04-06T02:52:09.022-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.182-0500 c20011| 2016-04-06T02:52:09.022-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 2, cfgver: 1 } ] }
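
The repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines are the w: "majority" wait made visible: conn25's changelog insert produced optime 1459929129000|1, which is applied locally but not yet majority-durable, so the command blocks until the committed snapshot catches up (or wtimeout expires). Any majority write exercises the same path; a sketch with an illustrative document:

    // Sketch: a majority write returns only once its optime is
    // majority-committed; wtimeout: 15000 bounds the wait, as in the log.
    assert.commandWorked(db.getSiblingDB("config").runCommand({
        insert: "changelog",
        documents: [ { _id: "example-entry", what: "split" } ],  // illustrative
        writeConcern: { w: "majority", wtimeout: 15000 }
    }));
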
[js_test:multi_coll_drop] 2016-04-06T02:52:51.182-0500 c20011| 2016-04-06T02:52:09.022-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.189-0500 c20011| 2016-04-06T02:52:09.022-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.192-0500 c20011| 2016-04-06T02:52:09.022-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|1, t: 1 } and is durable through: { ts: Timestamp 1459929128000|70, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.193-0500 c20011| 2016-04-06T02:52:09.022-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|1, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929128000|70, t: 1 }, name-id: "150" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.194-0500 c20011| 2016-04-06T02:52:09.022-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.203-0500 c20011| 2016-04-06T02:52:09.022-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.205-0500 c20011| 2016-04-06T02:52:09.023-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.217-0500 c20011| 2016-04-06T02:52:09.024-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.219-0500 c20011| 2016-04-06T02:52:09.024-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.224-0500 c20011| 2016-04-06T02:52:09.024-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.228-0500 c20011| 2016-04-06T02:52:09.024-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|1, t: 1 } and is durable through: { ts: Timestamp 1459929129000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.230-0500 c20011| 2016-04-06T02:52:09.024-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.236-0500 c20011| 2016-04-06T02:52:09.024-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.237-0500 c20011| 2016-04-06T02:52:09.024-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.241-0500 c20011| 2016-04-06T02:52:09.024-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.248-0500 c20011| 2016-04-06T02:52:09.024-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|1, t: 1 } and is durable through: { ts: Timestamp 1459929129000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.250-0500 c20011| 2016-04-06T02:52:09.024-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.259-0500 c20011| 2016-04-06T02:52:09.024-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.268-0500 c20011| 2016-04-06T02:52:09.025-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
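
"Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|1, t: 1 }" is the primary recomputing the commit point from the positions reported via replSetUpdatePosition: member 0 is still far behind at 1459929117000|1, but members 1 and 2 are both durable through ...|1, and two of three is a majority. A toy model of that rule (hypothetical helper over plain numeric optimes, not server code):

    // Sketch: the commit point is the highest optime that a majority of
    // members have made durable, i.e. the (majority)th-largest report.
    function commitPoint(durableOpTimes) {
        var sorted = durableOpTimes.slice().sort(function (a, b) { return b - a; });
        var majority = Math.floor(sorted.length / 2) + 1;
        return sorted[majority - 1];
    }
    // Member 0 lags; members 1 and 2 are durable through 1459929129000.
    assert.eq(commitPoint([1459929117000, 1459929129000, 1459929129000]),
              1459929129000);
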
[js_test:multi_coll_drop] 2016-04-06T02:52:51.279-0500 c20011| 2016-04-06T02:52:09.025-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.294-0500 c20011| 2016-04-06T02:52:09.025-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:09.014-0500-5704c02965c17830b843f199", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129014), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -87.0 }, max: { _id: MaxKey } }, left: { min: { _id: -87.0 }, max: { _id: -86.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -86.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.308-0500 c20011| 2016-04-06T02:52:09.025-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.314-0500 c20011| 2016-04-06T02:52:09.025-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.317-0500 c20011| 2016-04-06T02:52:09.029-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f198') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.319-0500 c20011| 2016-04-06T02:52:09.029-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.322-0500 c20011| 2016-04-06T02:52:09.029-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02865c17830b843f198') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.347-0500 c20011| 2016-04-06T02:52:09.029-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.350-0500 c20011| 2016-04-06T02:52:09.029-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.358-0500 c20011| 2016-04-06T02:52:09.031-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.358-0500 c20011| 2016-04-06T02:52:09.031-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.363-0500 c20011| 2016-04-06T02:52:09.031-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|2, t: 1 } and is durable through: { ts: Timestamp 1459929129000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.367-0500 c20011| 2016-04-06T02:52:09.031-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.373-0500 c20011| 2016-04-06T02:52:09.031-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.375-0500 c20011| 2016-04-06T02:52:09.031-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|1, t: 1 }, name-id: "151" }
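
The D QUERY lines above show the planner's side of the lock-release findAndModify on config.locks: the ts_1 index is the only relevant index, so the single IXSCAN plan runs without being cached. The same verdict is visible to clients through explain; a sketch of the query shape (any ObjectId value works):

    // Sketch: explain the lock-release predicate; the winning plan should
    // bottom out in an IXSCAN over { ts: 1 } (index ts_1).
    var exp = db.getSiblingDB("config").locks
                .find({ ts: ObjectId("5704c02865c17830b843f198") })
                .explain();
    printjson(exp.queryPlanner.winningPlan);
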
[js_test:multi_coll_drop] 2016-04-06T02:52:51.376-0500 c20011| 2016-04-06T02:52:09.031-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.380-0500 c20011| 2016-04-06T02:52:09.032-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.384-0500 c20011| 2016-04-06T02:52:09.032-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.386-0500 c20011| 2016-04-06T02:52:09.032-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.388-0500 c20011| 2016-04-06T02:52:09.032-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.390-0500 c20011| 2016-04-06T02:52:09.032-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|2, t: 1 } and is durable through: { ts: Timestamp 1459929129000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.393-0500 c20011| 2016-04-06T02:52:09.032-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|1, t: 1 }, name-id: "151" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.395-0500 c20011| 2016-04-06T02:52:09.032-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.399-0500 c20011| 2016-04-06T02:52:09.033-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.402-0500 c20011| 2016-04-06T02:52:09.033-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.405-0500 c20011| 2016-04-06T02:52:09.033-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|2, t: 1 } and is durable through: { ts: Timestamp 1459929129000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.407-0500 c20011| 2016-04-06T02:52:09.033-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.410-0500 c20011| 2016-04-06T02:52:09.033-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.415-0500 c20011| 2016-04-06T02:52:09.033-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.424-0500 c20011| 2016-04-06T02:52:09.034-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.469-0500 c20011| 2016-04-06T02:52:09.034-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02865c17830b843f198') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.471-0500 c20011| 2016-04-06T02:52:09.034-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.474-0500 c20011| 2016-04-06T02:52:09.034-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.478-0500 c20011| 2016-04-06T02:52:09.035-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.482-0500 c20011| 2016-04-06T02:52:09.036-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.486-0500 c20011| 2016-04-06T02:52:09.036-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.488-0500 c20011| 2016-04-06T02:52:09.036-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.492-0500 c20011| 2016-04-06T02:52:09.036-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|2, t: 1 } and is durable through: { ts: Timestamp 1459929129000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.497-0500 c20011| 2016-04-06T02:52:09.036-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.502-0500 c20011| 2016-04-06T02:52:09.036-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02965c17830b843f19a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129036), why: "splitting chunk [{ _id: -86.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.504-0500 c20011| 2016-04-06T02:52:09.036-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.510-0500 c20011| 2016-04-06T02:52:09.036-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
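
This findAndModify is the distributed-lock acquisition for the next split: it atomically flips the lock document for "multidrop.coll" from state 0 (free) to state 2 (held), stamping who, process, when, and why, with upsert: true so a missing document is created and new: true so the granted document comes back. A shell sketch of the same claim (identifiers copied from the log; a real caller would generate a fresh ObjectId):

    // Sketch: take the lock only if it is currently free (state: 0).
    // The granted document comes back iff this caller won; losing the
    // race to a concurrent upsert surfaces as a duplicate-key error.
    var granted = db.getSiblingDB("config").locks.findAndModify({
        query:  { _id: "multidrop.coll", state: 0 },
        update: { $set: { ts: ObjectId(), state: 2,
                          who: "mongovm16:20010:1459929128:185613966:conn5",
                          process: "mongovm16:20010:1459929128:185613966",
                          when: new Date(),
                          why: "splitting chunk [{ _id: -86.0 }, { _id: MaxKey }) in multidrop.coll" } },
        upsert: true,
        new: true
    });
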
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.513-0500 c20011| 2016-04-06T02:52:09.038-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.519-0500 c20011| 2016-04-06T02:52:09.038-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.525-0500 c20011| 2016-04-06T02:52:09.041-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:51.528-0500 c20011| 2016-04-06T02:52:09.042-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:51.529-0500 c20011| 2016-04-06T02:52:09.042-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:51.532-0500 c20011| 2016-04-06T02:52:09.042-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.537-0500 c20011| 2016-04-06T02:52:09.042-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|3, t: 1 } and is durable through: { ts: Timestamp 1459929129000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.552-0500 c20011| 2016-04-06T02:52:09.042-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.557-0500 c20011| 2016-04-06T02:52:09.042-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, 
collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:51.563-0500 c20011| 2016-04-06T02:52:09.043-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:51.563-0500 c20011| 2016-04-06T02:52:09.043-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:51.568-0500 c20011| 2016-04-06T02:52:09.043-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|3, t: 1 } and is durable through: { ts: Timestamp 1459929129000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.577-0500 c20011| 2016-04-06T02:52:09.044-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.593-0500 c20011| 2016-04-06T02:52:09.044-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.600-0500 c20011| 2016-04-06T02:52:09.047-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|3, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|2, t: 1 }, name-id: "152" } [js_test:multi_coll_drop] 2016-04-06T02:52:51.614-0500 c20011| 2016-04-06T02:52:09.049-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:51.617-0500 c20011| 2016-04-06T02:52:09.049-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:51.624-0500 c20011| 2016-04-06T02:52:09.049-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.626-0500 
c20011| 2016-04-06T02:52:09.049-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|3, t: 1 } and is durable through: { ts: Timestamp 1459929129000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.629-0500 c20011| 2016-04-06T02:52:09.049-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.637-0500 c20011| 2016-04-06T02:52:09.049-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.640-0500 c20011| 2016-04-06T02:52:09.050-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.648-0500 c20011| 2016-04-06T02:52:09.050-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02965c17830b843f19a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129036), why: "splitting chunk [{ _id: -86.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02965c17830b843f19a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129036), why: "splitting chunk [{ _id: -86.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.652-0500 c20011| 2016-04-06T02:52:09.050-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.663-0500 c20011| 2016-04-06T02:52:09.050-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 
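
Note the cadence across these entries: the previous split released the lock ($set: { state: 0 }, the 4ms findAndModify at 02:52:09.034) and this split re-acquired it moments later (state 0 to 2, granted above after 13ms, most of which is the majority-commit wait). Between operations the holder is directly inspectable:

    // Sketch: peek at the metadata lock for one collection.
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    printjson({ state: lock.state, who: lock.who, why: lock.why });
    // state: 0 means free, 2 means held, matching the transitions above.
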
[js_test:multi_coll_drop] 2016-04-06T02:52:51.663-0500 c20011| 2016-04-06T02:52:09.050-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.669-0500 c20011| 2016-04-06T02:52:09.051-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.678-0500 c20011| 2016-04-06T02:52:09.051-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: -85.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-86.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-85.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|30 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.679-0500 c20011| 2016-04-06T02:52:09.052-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:51.684-0500 c20011| 2016-04-06T02:52:09.052-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:51.694-0500 c20011| 2016-04-06T02:52:09.052-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.712-0500 c20011| 2016-04-06T02:52:09.052-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-86.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.723-0500 c20011| 2016-04-06T02:52:09.052-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.723-0500 c20011| 2016-04-06T02:52:09.052-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.724-0500 c20011| 2016-04-06T02:52:09.052-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-85.0" }
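
The score(2.0003) line spells out this server version's plan-ranking formula for the chunk lookup: score = baseScore + productivity + tieBreakers, where productivity = advanced/works and each of the three tie-breaker bonuses is worth 0.0001. For this lone { ns: 1, lastmod: 1 } IXSCAN that is 1 + 1/1 + 3 * 0.0001:

    // Sketch: reproduce the planner's arithmetic from the log line.
    var baseScore = 1, advanced = 1, works = 1, bonus = 0.0001;
    var score = baseScore + advanced / works + 3 * bonus; // noFetch + noSort + noIxisect
    print(score.toFixed(4)); // 2.0003
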
[js_test:multi_coll_drop] 2016-04-06T02:52:51.727-0500 c20011| 2016-04-06T02:52:09.052-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|3, t: 1 } and is durable through: { ts: Timestamp 1459929129000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.733-0500 c20011| 2016-04-06T02:52:09.052-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.741-0500 c20011| 2016-04-06T02:52:09.052-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.750-0500 c20011| 2016-04-06T02:52:09.055-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.754-0500 c20011| 2016-04-06T02:52:09.055-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.763-0500 c20011| 2016-04-06T02:52:09.056-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|3, t: 1 }, name-id: "153" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.767-0500 c20011| 2016-04-06T02:52:09.058-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.768-0500 c20011| 2016-04-06T02:52:09.059-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.771-0500 c20011| 2016-04-06T02:52:09.059-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.772-0500 c20011| 2016-04-06T02:52:09.059-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.791-0500 c20011| 2016-04-06T02:52:09.059-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|4, t: 1 } and is durable through: { ts: Timestamp 1459929129000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.796-0500 c20011| 2016-04-06T02:52:09.059-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929129000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|3, t: 1 }, name-id: "153" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.800-0500 c20011| 2016-04-06T02:52:09.059-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.813-0500 c20011| 2016-04-06T02:52:09.059-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.817-0500 c20011| 2016-04-06T02:52:09.059-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.818-0500 c20011| 2016-04-06T02:52:09.059-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.825-0500 c20011| 2016-04-06T02:52:09.060-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.838-0500 c20011| 2016-04-06T02:52:09.060-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|4, t: 1 } and is durable through: { ts: Timestamp 1459929129000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.844-0500 c20011| 2016-04-06T02:52:09.060-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|3, t: 1 }, name-id: "153" }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.851-0500 c20011| 2016-04-06T02:52:09.060-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.856-0500 c20011| 2016-04-06T02:52:09.062-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.859-0500 c20011| 2016-04-06T02:52:09.062-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:51.883-0500 c20011| 2016-04-06T02:52:09.062-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.919-0500 c20011| 2016-04-06T02:52:09.062-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|4, t: 1 } and is durable through: { ts: Timestamp 1459929129000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.920-0500 c20011| 2016-04-06T02:52:09.062-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.922-0500 c20011| 2016-04-06T02:52:09.062-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.926-0500 c20011| 2016-04-06T02:52:09.062-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.932-0500 c20011| 2016-04-06T02:52:09.062-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: -85.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-86.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-85.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|30 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.934-0500 c20011| 2016-04-06T02:52:09.062-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:52:51.937-0500 c20011| 2016-04-06T02:52:09.063-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.939-0500 c20011| 2016-04-06T02:52:09.063-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.945-0500 c20011| 2016-04-06T02:52:09.063-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:09.062-0500-5704c02965c17830b843f19b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129062), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -86.0 }, max: { _id: MaxKey } }, left: { min: { _id: -86.0 }, max: { _id: -85.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -85.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:51.947-0500 c20011| 2016-04-06T02:52:09.063-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms
acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:51.956-0500 c20011| 2016-04-06T02:52:09.065-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|4, t: 1 }, name-id: "154" } [js_test:multi_coll_drop] 2016-04-06T02:52:51.961-0500 c20011| 2016-04-06T02:52:09.066-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:51.962-0500 c20011| 2016-04-06T02:52:09.066-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:51.980-0500 c20011| 2016-04-06T02:52:09.066-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|4, t: 1 } and is durable through: { ts: Timestamp 1459929129000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:51.985-0500 c20011| 2016-04-06T02:52:09.066-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:51.989-0500 c20011| 2016-04-06T02:52:09.066-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929129000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|4, t: 1 }, name-id: "154" } [js_test:multi_coll_drop] 2016-04-06T02:52:51.995-0500 c20011| 2016-04-06T02:52:09.066-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.030-0500 c20011| 2016-04-06T02:52:09.066-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.032-0500 c20011| 2016-04-06T02:52:09.066-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.037-0500 c20011| 2016-04-06T02:52:09.066-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.039-0500 c20011| 2016-04-06T02:52:09.066-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.043-0500 c20011| 2016-04-06T02:52:09.066-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.047-0500 c20011| 2016-04-06T02:52:09.066-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|5, t: 1 } and is durable through: { ts: Timestamp 1459929129000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.052-0500 c20011| 2016-04-06T02:52:09.066-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|4, t: 1 }, name-id: "154" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.057-0500 c20011| 2016-04-06T02:52:09.066-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.062-0500 c20011| 2016-04-06T02:52:09.067-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.064-0500 c20011| 2016-04-06T02:52:09.067-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.068-0500 c20011| 2016-04-06T02:52:09.067-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|5, t: 1 } and is durable through: { ts: Timestamp 1459929129000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.071-0500 c20011| 2016-04-06T02:52:09.067-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929129000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|4, t: 1 }, name-id: "154" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.077-0500 c20011| 2016-04-06T02:52:09.067-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached 
optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.081-0500 c20011| 2016-04-06T02:52:09.067-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.089-0500 c20011| 2016-04-06T02:52:09.073-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.090-0500 c20011| 2016-04-06T02:52:09.073-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.093-0500 c20011| 2016-04-06T02:52:09.073-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|5, t: 1 } and is durable through: { ts: Timestamp 1459929129000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.093-0500 c20011| 2016-04-06T02:52:09.073-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.096-0500 c20011| 2016-04-06T02:52:09.073-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.100-0500 c20011| 2016-04-06T02:52:09.073-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.104-0500 c20011| 2016-04-06T02:52:09.074-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 
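
The replSetUpdatePosition round trips above are the replication position-reporting protocol: each secondary periodically reports, for every member it knows about, the optime it has applied and the optime it has made durable, and when a majority of the set is durable at some optime the primary advances the commit point (the "Updating _lastCommittedOpTime" lines). A minimal, illustrative sketch of that majority calculation in shell JavaScript follows; this is not the server's actual code, and the real primary also counts its own optime, which is why the commit point here can run ahead of what any single report contains:

    // Illustrative sketch only: pick the newest timestamp that a strict
    // majority of members has made durable (scalar ts values for brevity).
    function majorityCommitPoint(durableTs) {
        var sorted = durableTs.slice().sort(function(a, b) { return b - a; });
        var majority = Math.floor(sorted.length / 2) + 1;
        return sorted[majority - 1];
    }
    // e.g. [10, 29, 29] in a 3-member set: majority = 2, commit point = 29.
    majorityCommitPoint([10, 29, 29]);
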
2016-04-06T02:52:52.112-0500 c20011| 2016-04-06T02:52:09.074-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:09.062-0500-5704c02965c17830b843f19b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129062), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -86.0 }, max: { _id: MaxKey } }, left: { min: { _id: -86.0 }, max: { _id: -85.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -85.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.117-0500 c20011| 2016-04-06T02:52:09.074-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.119-0500 c20011| 2016-04-06T02:52:09.074-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02965c17830b843f19a') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.123-0500 c20011| 2016-04-06T02:52:09.074-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.126-0500 c20011| 2016-04-06T02:52:09.074-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.129-0500 c20011| 2016-04-06T02:52:09.074-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.135-0500 c20011| 2016-04-06T02:52:09.074-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.144-0500 c20011| 2016-04-06T02:52:09.074-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.147-0500 c20011| 2016-04-06T02:52:09.074-0500 D REPL [conn15] received notification that node with memberID 2 in config 
with version 1 has reached optime: { ts: Timestamp 1459929129000|5, t: 1 } and is durable through: { ts: Timestamp 1459929129000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.150-0500 c20011| 2016-04-06T02:52:09.074-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.155-0500 c20011| 2016-04-06T02:52:09.074-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.160-0500 c20011| 2016-04-06T02:52:09.074-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02965c17830b843f19a') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.167-0500 c20011| 2016-04-06T02:52:09.074-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.173-0500 c20011| 2016-04-06T02:52:09.074-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.178-0500 c20011| 2016-04-06T02:52:09.076-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.178-0500 c20011| 2016-04-06T02:52:09.076-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.180-0500 c20011| 2016-04-06T02:52:09.076-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|6, t: 1 } and is durable through: { ts: Timestamp 1459929129000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.186-0500 c20011| 2016-04-06T02:52:09.077-0500 D REPL [conn12] received notification that node with 
memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.193-0500 c20011| 2016-04-06T02:52:09.077-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.195-0500 c20011| 2016-04-06T02:52:09.077-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.215-0500 c20011| 2016-04-06T02:52:09.077-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.217-0500 c20011| 2016-04-06T02:52:09.077-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.223-0500 c20011| 2016-04-06T02:52:09.077-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.230-0500 c20011| 2016-04-06T02:52:09.077-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|6, t: 1 } and is durable through: { ts: Timestamp 1459929129000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.234-0500 c20011| 2016-04-06T02:52:09.077-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.236-0500 c20011| 2016-04-06T02:52:09.079-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.243-0500 c20011| 2016-04-06T02:52:09.079-0500 D REPL [conn25] Required 
snapshot optime: { ts: Timestamp 1459929129000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|5, t: 1 }, name-id: "155" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.248-0500 c20011| 2016-04-06T02:52:09.080-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.248-0500 c20011| 2016-04-06T02:52:09.080-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.249-0500 c20011| 2016-04-06T02:52:09.080-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.251-0500 c20011| 2016-04-06T02:52:09.080-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|6, t: 1 } and is durable through: { ts: Timestamp 1459929129000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.259-0500 c20011| 2016-04-06T02:52:09.080-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.266-0500 c20011| 2016-04-06T02:52:09.080-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.272-0500 c20011| 2016-04-06T02:52:09.085-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02965c17830b843f19a') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.275-0500 c20011| 2016-04-06T02:52:09.085-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } 
protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.278-0500 c20011| 2016-04-06T02:52:09.086-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.286-0500 c20011| 2016-04-06T02:52:09.087-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.288-0500 c20011| 2016-04-06T02:52:09.087-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.291-0500 c20011| 2016-04-06T02:52:09.087-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|6, t: 1 } and is durable through: { ts: Timestamp 1459929129000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.294-0500 c20011| 2016-04-06T02:52:09.087-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.301-0500 c20011| 2016-04-06T02:52:09.087-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.305-0500 c20011| 2016-04-06T02:52:09.088-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.310-0500 c20011| 2016-04-06T02:52:09.088-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.316-0500 c20011| 2016-04-06T02:52:09.089-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|30 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.320-0500 c20011| 2016-04-06T02:52:09.089-0500 D 
COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.347-0500 c20011| 2016-04-06T02:52:09.089-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|30 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.348-0500 c20011| 2016-04-06T02:52:09.089-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:52.352-0500 c20011| 2016-04-06T02:52:09.090-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|30 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.363-0500 c20011| 2016-04-06T02:52:09.091-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.367-0500 c20011| 2016-04-06T02:52:09.091-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.369-0500 c20011| 2016-04-06T02:52:09.091-0500 D COMMAND [conn10] Using 'committed' snapshot. 
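
The Waiting for / Using 'committed' snapshot pair above is readConcern majority at work: the find blocks until the snapshot visible to majority reads includes the requested afterOpTime (the same condition the "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines are waiting on), then runs against that snapshot. The logged command can be written out in shell form as below; note the notation change, since this log appears to print a Timestamp's seconds field multiplied by 1000, so "Timestamp 1459929129000|6" is Timestamp(1459929129, 6) in the shell:

    // Majority read that waits for the given optime to be majority-committed
    // before returning (mirrors the conn10 find above).
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 30) } },
        sort: { lastmod: 1 },
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929129, 6), t: NumberLong(1) } },
        maxTimeMS: 30000
    });
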
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.372-0500 c20011| 2016-04-06T02:52:09.091-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:52.375-0500 c20011| 2016-04-06T02:52:09.093-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.382-0500 c20011| 2016-04-06T02:52:09.093-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02965c17830b843f19c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129093), why: "splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.385-0500 c20011| 2016-04-06T02:52:09.093-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.397-0500 c20011| 2016-04-06T02:52:09.093-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.399-0500 c20011| 2016-04-06T02:52:09.094-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.404-0500 c20011| 2016-04-06T02:52:09.094-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.407-0500 c20011| 2016-04-06T02:52:09.094-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.412-0500 c20011| 2016-04-06T02:52:09.096-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.414-0500 c20011| 2016-04-06T02:52:09.096-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.420-0500 c20011| 2016-04-06T02:52:09.096-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|7, t: 1 } and is durable through: { ts: Timestamp 1459929129000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.422-0500 c20011| 2016-04-06T02:52:09.096-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.431-0500 c20011| 2016-04-06T02:52:09.096-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.436-0500 c20011| 2016-04-06T02:52:09.096-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.444-0500 c20011| 2016-04-06T02:52:09.097-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, 
collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.455-0500 c20011| 2016-04-06T02:52:09.098-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|6, t: 1 }, name-id: "156" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.460-0500 c20011| 2016-04-06T02:52:09.099-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.461-0500 c20011| 2016-04-06T02:52:09.099-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.463-0500 c20011| 2016-04-06T02:52:09.099-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.470-0500 c20011| 2016-04-06T02:52:09.099-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|7, t: 1 } and is durable through: { ts: Timestamp 1459929129000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.474-0500 c20011| 2016-04-06T02:52:09.099-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|6, t: 1 }, name-id: "156" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.478-0500 c20011| 2016-04-06T02:52:09.099-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.484-0500 c20011| 2016-04-06T02:52:09.100-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.486-0500 c20011| 2016-04-06T02:52:09.100-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.493-0500 c20011| 
2016-04-06T02:52:09.100-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|7, t: 1 } and is durable through: { ts: Timestamp 1459929129000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.493-0500 c20011| 2016-04-06T02:52:09.100-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.503-0500 c20011| 2016-04-06T02:52:09.100-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.512-0500 c20011| 2016-04-06T02:52:09.100-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.521-0500 c20011| 2016-04-06T02:52:09.100-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.528-0500 c20011| 2016-04-06T02:52:09.100-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.532-0500 c20011| 2016-04-06T02:52:09.100-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02965c17830b843f19c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129093), why: "splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02965c17830b843f19c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129093), why: "splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { 
w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.539-0500 c20011| 2016-04-06T02:52:09.100-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.541-0500 c20011| 2016-04-06T02:52:09.100-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.545-0500 c20011| 2016-04-06T02:52:09.101-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.551-0500 c20011| 2016-04-06T02:52:09.101-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.553-0500 c20011| 2016-04-06T02:52:09.101-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|7, t: 1 } and is durable through: { ts: Timestamp 1459929129000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.556-0500 c20011| 2016-04-06T02:52:09.101-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.589-0500 c20011| 2016-04-06T02:52:09.101-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.593-0500 c20011| 2016-04-06T02:52:09.101-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.594-0500 c20011| 2016-04-06T02:52:09.101-0500 D COMMAND [conn25] Using 'committed' snapshot. 
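
The config.locks findAndModify traffic above is the distributed lock that serializes chunk metadata changes: the lock held for the -86.0 split is released by resetting its document to state: 0 (keyed by the ts the holder wrote at acquire time), and the lock is immediately re-taken for the next split by moving the document from state: 0 to state: 2 with a fresh ts, holder identity, and a human-readable why. Both sides write with w: "majority" so a lock transition cannot be lost to a config-server rollback. In shell form, with the values from the log:

    // Release the -86.0 split's lock (state -> 0), keyed by its acquire-time ts.
    db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { ts: ObjectId("5704c02965c17830b843f19a") },
        update: { $set: { state: 0 } },
        writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000
    });
    // Re-acquire for the next split (state 0 -> 2) with a fresh ts and holder info.
    db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: { ts: ObjectId("5704c02965c17830b843f19c"), state: 2,
                          who: "mongovm16:20010:1459929128:185613966:conn5",
                          process: "mongovm16:20010:1459929128:185613966",
                          when: new Date(1459929129093),
                          why: "splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll" } },
        upsert: true, new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000
    });
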
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.599-0500 c20011| 2016-04-06T02:52:09.101-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:52.604-0500 c20011| 2016-04-06T02:52:09.101-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.608-0500 c20011| 2016-04-06T02:52:09.101-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.613-0500 c20011| 2016-04-06T02:52:09.101-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|32 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|7, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.616-0500 c20011| 2016-04-06T02:52:09.101-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.622-0500 c20011| 2016-04-06T02:52:09.101-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|32 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|7, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.623-0500 c20011| 2016-04-06T02:52:09.101-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:52.627-0500 c20011| 2016-04-06T02:52:09.102-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|32 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|7, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.633-0500 c20011| 2016-04-06T02:52:09.102-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: -84.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-85.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-84.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|32 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.635-0500 c20011| 2016-04-06T02:52:09.102-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:52.646-0500 c20011| 2016-04-06T02:52:09.102-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:52.651-0500 c20011| 2016-04-06T02:52:09.102-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.652-0500 c20011| 2016-04-06T02:52:09.102-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-85.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.653-0500 c20011| 2016-04-06T02:52:09.102-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-84.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.657-0500 c20011| 2016-04-06T02:52:09.102-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", 
maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.676-0500 c20011| 2016-04-06T02:52:09.102-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.693-0500 c20011| 2016-04-06T02:52:09.103-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|7, t: 1 }, name-id: "157" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.698-0500 c20011| 2016-04-06T02:52:09.104-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.699-0500 c20011| 2016-04-06T02:52:09.104-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.699-0500 c20011| 2016-04-06T02:52:09.104-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|8, t: 1 } and is durable through: { ts: Timestamp 1459929129000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.703-0500 c20011| 2016-04-06T02:52:09.104-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929129000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|7, t: 1 }, name-id: "157" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.708-0500 c20011| 2016-04-06T02:52:09.104-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.714-0500 c20011| 2016-04-06T02:52:09.104-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.715-0500 c20011| 2016-04-06T02:52:09.105-0500 D COMMAND [conn14] run command local.$cmd { 
getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.718-0500 c20011| 2016-04-06T02:52:09.105-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.718-0500 c20011| 2016-04-06T02:52:09.105-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.720-0500 c20011| 2016-04-06T02:52:09.105-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.722-0500 c20011| 2016-04-06T02:52:09.105-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|8, t: 1 } and is durable through: { ts: Timestamp 1459929129000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.724-0500 c20011| 2016-04-06T02:52:09.105-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|7, t: 1 }, name-id: "157" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.726-0500 c20011| 2016-04-06T02:52:09.105-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.731-0500 c20011| 2016-04-06T02:52:09.107-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.731-0500 c20011| 2016-04-06T02:52:09.107-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.732-0500 c20011| 2016-04-06T02:52:09.107-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|8, t: 1 } and is durable through: { ts: Timestamp 1459929129000|8, t: 1 } [js_test:multi_coll_drop] 
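
Interleaved through all of this, conn13 and conn14 are the other two config members tailing the primary's oplog: awaitData getMore calls that block up to maxTimeMS (2500 ms) for new entries and carry the caller's replication term and lastKnownCommittedOpTime, so the primary can push commit-point advances back down even when a batch returns nreturned: 0. One such round trip, reconstructed from the log (the cursor id belongs to the secondary's own session, so this is illustrative rather than something to paste into a shell):

    // One oplog-tailing round trip as logged on conn14; term and
    // lastKnownCommittedOpTime are internal replication fields.
    db.getSiblingDB("local").runCommand({
        getMore: NumberLong("17466612721"),
        collection: "oplog.rs",
        maxTimeMS: 2500,
        term: NumberLong(1),
        lastKnownCommittedOpTime: { ts: Timestamp(1459929129, 8), t: NumberLong(1) }
    });
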
2016-04-06T02:52:52.733-0500 c20011| 2016-04-06T02:52:09.107-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.735-0500 c20011| 2016-04-06T02:52:09.107-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.740-0500 c20011| 2016-04-06T02:52:09.107-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.751-0500 c20011| 2016-04-06T02:52:09.107-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: -84.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-85.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-84.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|32 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.760-0500 c20011| 2016-04-06T02:52:09.107-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.767-0500 c20011| 2016-04-06T02:52:09.107-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:09.107-0500-5704c02965c17830b843f19d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129107), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -85.0 }, max: { _id: MaxKey } }, left: { min: { _id: -85.0 }, max: { _id: -84.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -84.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') 
} } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.770-0500 c20011| 2016-04-06T02:52:09.107-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.773-0500 c20011| 2016-04-06T02:52:09.107-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.774-0500 c20011| 2016-04-06T02:52:09.109-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.775-0500 s20014| 2016-04-06T02:52:33.720-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:52.777-0500 s20014| 2016-04-06T02:52:33.720-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:52.779-0500 s20014| 2016-04-06T02:52:33.720-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:52:52.781-0500 c20012| 2016-04-06T02:52:08.583-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 451 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.583-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|20, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.782-0500 c20012| 2016-04-06T02:52:08.583-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 450 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.783-0500 c20012| 2016-04-06T02:52:08.583-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 451 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:52.785-0500 c20012| 2016-04-06T02:52:08.584-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.789-0500 c20012| 2016-04-06T02:52:08.584-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 453 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, 
t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.790-0500 c20012| 2016-04-06T02:52:08.584-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 453 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:52.791-0500 c20012| 2016-04-06T02:52:08.585-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 453 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.794-0500 c20012| 2016-04-06T02:52:08.585-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 451 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.795-0500 c20012| 2016-04-06T02:52:08.585-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|21, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.798-0500 c20012| 2016-04-06T02:52:08.585-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:52.809-0500 c20012| 2016-04-06T02:52:08.585-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 456 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.585-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.820-0500 s20014| 2016-04-06T02:52:36.593-0500 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 296 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:06.593-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929156593) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.822-0500 c20011| 2016-04-06T02:52:09.110-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.824-0500 c20011| 2016-04-06T02:52:09.111-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.830-0500 c20011| 2016-04-06T02:52:09.111-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.831-0500 c20011| 2016-04-06T02:52:09.111-0500 D COMMAND [conn15] command: 
replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.835-0500 c20011| 2016-04-06T02:52:09.111-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.839-0500 c20011| 2016-04-06T02:52:09.111-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|8, t: 1 } and is durable through: { ts: Timestamp 1459929129000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.845-0500 c20011| 2016-04-06T02:52:09.111-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.849-0500 c20011| 2016-04-06T02:52:09.111-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|9, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|8, t: 1 }, name-id: "158" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.852-0500 c20011| 2016-04-06T02:52:09.112-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.859-0500 c20011| 2016-04-06T02:52:09.112-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.859-0500 c20011| 2016-04-06T02:52:09.112-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.859-0500 c20011| 2016-04-06T02:52:09.112-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.862-0500 c20011| 2016-04-06T02:52:09.112-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.866-0500 c20012| 2016-04-06T02:52:08.585-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous 
command 456 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:52.872-0500 c20012| 2016-04-06T02:52:08.585-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|4 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|21, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.880-0500 c20012| 2016-04-06T02:52:08.585-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.883-0500 c20012| 2016-04-06T02:52:08.586-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|4 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|21, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.886-0500 c20011| 2016-04-06T02:52:09.112-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|9, t: 1 } and is durable through: { ts: Timestamp 1459929129000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.888-0500 c20011| 2016-04-06T02:52:09.112-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|9, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|8, t: 1 }, name-id: "158" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.892-0500 c20011| 2016-04-06T02:52:09.112-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.897-0500 c20011| 2016-04-06T02:52:09.112-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|9, t: 1 } and is durable through: { ts: Timestamp 1459929129000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.902-0500 c20011| 2016-04-06T02:52:09.112-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929129000|9, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|8, t: 1 }, name-id: "158" } [js_test:multi_coll_drop] 2016-04-06T02:52:52.905-0500 c20012| 2016-04-06T02:52:08.586-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:52.912-0500 c20012| 2016-04-06T02:52:08.586-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|4 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|21, t: 1 } }, maxTimeMS: 30000 } planSummary: 
IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.919-0500 c20012| 2016-04-06T02:52:08.587-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 456 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|22, t: 1, h: 8266891418716651152, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: -98.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-99.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-98.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.924-0500 c20012| 2016-04-06T02:52:08.587-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|22 and ending at ts: Timestamp 1459929128000|22 [js_test:multi_coll_drop] 2016-04-06T02:52:52.927-0500 c20012| 2016-04-06T02:52:08.588-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:52.927-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.930-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.931-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.932-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.940-0500 c20011| 2016-04-06T02:52:09.113-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.946-0500 c20011| 2016-04-06T02:52:09.113-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:52.957-0500 c20011| 2016-04-06T02:52:09.113-0500 D COMMAND [conn13] run 
command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.960-0500 c20011| 2016-04-06T02:52:09.113-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:52.960-0500 c20011| 2016-04-06T02:52:09.113-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:52.962-0500 c20011| 2016-04-06T02:52:09.113-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.967-0500 c20011| 2016-04-06T02:52:09.113-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|9, t: 1 } and is durable through: { ts: Timestamp 1459929129000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:52.968-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.970-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.971-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.972-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.973-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.977-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.978-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.980-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.983-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.985-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.985-0500 c20012| 2016-04-06T02:52:08.589-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:52.989-0500 c20012| 2016-04-06T02:52:08.589-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:52.992-0500 c20012| 2016-04-06T02:52:08.589-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 458 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.589-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|21, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:52.997-0500 c20012| 2016-04-06T02:52:08.589-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 458 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:52.999-0500 c20012| 2016-04-06T02:52:08.590-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-99.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.000-0500 c20012| 2016-04-06T02:52:08.590-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-98.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.005-0500 c20011| 2016-04-06T02:52:09.113-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.010-0500 c20011| 2016-04-06T02:52:09.113-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:53.012-0500 c20011| 2016-04-06T02:52:09.113-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:53.017-0500 c20011| 2016-04-06T02:52:09.113-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.033-0500 c20011| 2016-04-06T02:52:09.113-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|9, t: 1 } and is durable through: { ts: Timestamp 1459929129000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.044-0500 c20011| 2016-04-06T02:52:09.114-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.050-0500 c20011| 2016-04-06T02:52:09.114-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.053-0500 c20011| 2016-04-06T02:52:09.114-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.065-0500 c20011| 2016-04-06T02:52:09.114-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:09.107-0500-5704c02965c17830b843f19d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129107), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -85.0 }, max: { _id: MaxKey } }, left: { min: { _id: -85.0 }, max: { _id: -84.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -84.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.072-0500 c20011| 2016-04-06T02:52:09.114-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.074-0500 c20011| 2016-04-06T02:52:09.114-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02965c17830b843f19c') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.075-0500 c20011| 2016-04-06T02:52:09.114-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.081-0500 c20011| 2016-04-06T02:52:09.114-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02965c17830b843f19c') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.083-0500 c20011| 2016-04-06T02:52:09.114-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.087-0500 c20011| 2016-04-06T02:52:09.114-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.095-0500 c20011| 2016-04-06T02:52:09.114-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.101-0500 c20011| 2016-04-06T02:52:09.114-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.105-0500 c20011| 2016-04-06T02:52:09.115-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|9, t: 1 }, name-id: "159" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.112-0500 c20011| 2016-04-06T02:52:09.116-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:53.114-0500 c20011| 2016-04-06T02:52:09.116-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:53.122-0500 c20011| 2016-04-06T02:52:09.116-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|10, t: 1 } and is durable through: { ts: Timestamp 1459929129000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.126-0500 c20011| 2016-04-06T02:52:09.116-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929129000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|9, t: 1 }, name-id: "159" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.131-0500 c20011| 2016-04-06T02:52:09.116-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: 
Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.151-0500 c20011| 2016-04-06T02:52:09.116-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:53.152-0500 c20011| 2016-04-06T02:52:09.116-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:53.154-0500 c20011| 2016-04-06T02:52:09.116-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.158-0500 c20011| 2016-04-06T02:52:09.116-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.166-0500 c20011| 2016-04-06T02:52:09.116-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|10, t: 1 } and is durable through: { ts: Timestamp 1459929129000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.167-0500 c20011| 2016-04-06T02:52:09.116-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|10, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|9, t: 1 }, name-id: "159" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.175-0500 c20011| 2016-04-06T02:52:09.116-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.181-0500 c20011| 2016-04-06T02:52:09.117-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.182-0500 c20012| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
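Above, conn25 releases the distributed lock on multidrop.coll by setting its state back to 0 with a majority-acknowledged findAndModify, and it re-acquires the lock moments later for the next split (state 0 to 2, why: "splitting chunk [{ _id: -84.0 }, { _id: MaxKey })"). A minimal sketch of that acquire/release pattern against config.locks; the field contents are abridged and illustrative:

    // Acquire: atomically claim the lock only if it is currently free (state: 0).
    var acquire = db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: { ts: ObjectId(), state: 2, why: "splitting chunk" } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    // Release: set state back to 0, keyed by the ts the acquirer wrote, so a
    // stale holder cannot release a lock someone else has since taken over.
    if (acquire.ok) {
        assert.commandWorked(db.getSiblingDB("config").runCommand({
            findAndModify: "locks",
            query: { ts: acquire.value.ts },
            update: { $set: { state: 0 } },
            writeConcern: { w: "majority", wtimeout: 15000 },
            maxTimeMS: 30000
        }));
    }

[js_test:multi_coll_drop]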
2016-04-06T02:52:53.184-0500 c20012| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.188-0500 c20012| 2016-04-06T02:52:08.590-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.190-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.190-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.196-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.198-0500 s20014| 2016-04-06T02:52:36.593-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:53.204-0500 s20014| 2016-04-06T02:52:36.593-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 297 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:53.211-0500 s20014| 2016-04-06T02:52:37.132-0500 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 298 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:07.132-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.212-0500 s20014| 2016-04-06T02:52:37.132-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:53.215-0500 s20014| 2016-04-06T02:52:37.132-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 299 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:53.221-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.221-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.222-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.223-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.224-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.225-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.229-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:53.238-0500 c20011| 2016-04-06T02:52:09.117-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.242-0500 c20011| 2016-04-06T02:52:09.125-0500 D COMMAND [conn12] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:53.242-0500 c20011| 2016-04-06T02:52:09.125-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:53.253-0500 c20011| 2016-04-06T02:52:09.125-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|10, t: 1 } and is durable through: { ts: Timestamp 1459929129000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.253-0500 c20011| 2016-04-06T02:52:09.125-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.259-0500 c20011| 2016-04-06T02:52:09.126-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.266-0500 c20011| 2016-04-06T02:52:09.126-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.277-0500 c20011| 2016-04-06T02:52:09.126-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:53.279-0500 c20011| 2016-04-06T02:52:09.126-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:53.283-0500 c20011| 2016-04-06T02:52:09.126-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.290-0500 c20011| 2016-04-06T02:52:09.126-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|10, t: 1 } and is durable through: { ts: Timestamp 1459929129000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.297-0500 c20011| 2016-04-06T02:52:09.126-0500 I COMMAND 
[conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.301-0500 c20011| 2016-04-06T02:52:09.126-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.310-0500 c20011| 2016-04-06T02:52:09.126-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02965c17830b843f19c') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.313-0500 c20011| 2016-04-06T02:52:09.126-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.325-0500 c20011| 2016-04-06T02:52:09.127-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.330-0500 c20011| 2016-04-06T02:52:09.127-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.337-0500 c20011| 2016-04-06T02:52:09.128-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.345-0500 c20011| 2016-04-06T02:52:09.128-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.349-0500 c20011| 2016-04-06T02:52:09.128-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.350-0500 c20011| 2016-04-06T02:52:09.129-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:53.358-0500 c20011| 2016-04-06T02:52:09.129-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.361-0500 c20011| 2016-04-06T02:52:09.129-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02965c17830b843f19e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129129), why: "splitting chunk [{ _id: -84.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.362-0500 c20011| 2016-04-06T02:52:09.129-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.366-0500 c20011| 2016-04-06T02:52:09.129-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.369-0500 c20011| 2016-04-06T02:52:09.129-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.372-0500 c20011| 2016-04-06T02:52:09.129-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.376-0500 c20011| 2016-04-06T02:52:09.129-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.379-0500 c20011| 2016-04-06T02:52:09.131-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|11, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|10, t: 1 }, name-id: "160" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.382-0500 c20011| 2016-04-06T02:52:09.131-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:53.383-0500 c20011| 2016-04-06T02:52:09.131-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:53.387-0500 c20011| 2016-04-06T02:52:09.131-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|11, t: 1 } and is durable through: { ts: Timestamp 1459929129000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.388-0500 c20011| 2016-04-06T02:52:09.131-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929129000|11, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|10, t: 1 }, name-id: "160" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.391-0500 c20011| 2016-04-06T02:52:09.131-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.394-0500 c20011| 2016-04-06T02:52:09.131-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.396-0500 c20011| 2016-04-06T02:52:09.132-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:53.397-0500 c20011| 2016-04-06T02:52:09.132-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:53.400-0500 c20011| 2016-04-06T02:52:09.132-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.401-0500 c20011| 2016-04-06T02:52:09.132-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.402-0500 c20011| 2016-04-06T02:52:09.132-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|11, t: 1 } and is durable through: { ts: Timestamp 1459929129000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.404-0500 c20011| 2016-04-06T02:52:09.132-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|11, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|10, t: 1 }, name-id: "160" } [js_test:multi_coll_drop] 2016-04-06T02:52:53.409-0500 c20011| 2016-04-06T02:52:09.132-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.412-0500 c20011| 2016-04-06T02:52:09.133-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:53.413-0500 c20011| 2016-04-06T02:52:09.133-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 
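The config reads interleaved here (for example conn10's find on config.chunks) carry readConcern { level: "majority", afterOpTime: ... }, which is why the log shows "Waiting for 'committed' snapshot" followed by "Using 'committed' snapshot": the server holds the read until its committed snapshot covers the requested optime. A sketch of such a causally pinned read, with the optime taken from the log (Timestamp 1459929129000|10 in log notation corresponds to Timestamp(1459929129, 10) in the shell):

    // Hedged sketch: a majority read pinned after a known optime, like conn10's.
    // The server waits until the committed snapshot reaches afterOpTime, so the
    // read can never observe less than what that prior majority write committed.
    assert.commandWorked(db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929129, 10), t: NumberLong(1) }
        },
        maxTimeMS: 30000
    }));

[js_test:multi_coll_drop]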
2016-04-06T02:52:53.414-0500 c20011| 2016-04-06T02:52:09.133-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|11, t: 1 } and is durable through: { ts: Timestamp 1459929129000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.416-0500 c20011| 2016-04-06T02:52:09.133-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|11, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.420-0500 c20011| 2016-04-06T02:52:09.133-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.424-0500 c20011| 2016-04-06T02:52:09.133-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.429-0500 c20011| 2016-04-06T02:52:09.133-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:53.435-0500 c20011| 2016-04-06T02:52:09.133-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.438-0500 c20011| 2016-04-06T02:52:09.139-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:53.442-0500 c20011| 2016-04-06T02:52:09.139-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 41 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:19.139-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:53.444-0500 c20011| 2016-04-06T02:52:09.139-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 41 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:53.451-0500 c20011| 2016-04-06T02:52:09.139-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } } [js_test:multi_coll_drop] 
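The burst of replSetUpdatePosition traffic above is the majority-commit machinery of the config replica set: each secondary reports its applied and durable optimes to the primary c20011, and once a majority is durable at an optime the primary logs "Updating _lastCommittedOpTime", which releases any writer blocked on "Required snapshot optime ... is not yet part of the current 'committed' snapshot". A minimal mongo-shell sketch of the client-side write pattern that produces those waits, assuming a connection db to the set's primary (the "demo"/"example" names are illustrative, not from the test):

// Insert that only acknowledges once a majority of the replica set has the
// write durable -- the same concern every config metadata write in this log uses.
var res = db.getSiblingDB("demo").example.insert(
    { _id: 1 },
    { writeConcern: { w: "majority", wtimeout: 15000 } }
);
printjson(res); // if wtimeout expires first, the result carries a writeConcernError

Until a secondary's replSetUpdatePosition pushes the write's optime into the committed snapshot, the caller sits in exactly the wait logged by conn25 above.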
[js_test:multi_coll_drop] 2016-04-06T02:52:53.453-0500 c20011| 2016-04-06T02:52:09.140-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 41 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, opTime: { ts: Timestamp 1459929129000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.454-0500 c20011| 2016-04-06T02:52:09.140-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:11.140Z
[js_test:multi_coll_drop] 2016-04-06T02:52:53.460-0500 c20011| 2016-04-06T02:52:09.141-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02965c17830b843f19e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129129), why: "splitting chunk [{ _id: -84.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02965c17830b843f19e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929129129), why: "splitting chunk [{ _id: -84.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.462-0500 c20011| 2016-04-06T02:52:09.143-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.467-0500 c20011| 2016-04-06T02:52:09.144-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: -83.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-84.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-83.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|34 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.469-0500 c20011| 2016-04-06T02:52:09.144-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:53.473-0500 c20011| 2016-04-06T02:52:09.144-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:53.475-0500 c20011| 2016-04-06T02:52:09.144-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.476-0500 c20011| 2016-04-06T02:52:09.144-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-84.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.478-0500 c20011| 2016-04-06T02:52:09.144-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-83.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.484-0500 c20011| 2016-04-06T02:52:09.144-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.493-0500 c20011| 2016-04-06T02:52:09.144-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.497-0500 c20011| 2016-04-06T02:52:09.145-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.498-0500 c20011| 2016-04-06T02:52:09.145-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.502-0500 c20011| 2016-04-06T02:52:09.145-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.504-0500 c20011| 2016-04-06T02:52:09.145-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|11, t: 1 } and is durable through: { ts: Timestamp 1459929129000|11, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.508-0500 c20011| 2016-04-06T02:52:09.145-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.517-0500 c20011| 2016-04-06T02:52:09.147-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.517-0500 c20011| 2016-04-06T02:52:09.147-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.519-0500 c20011| 2016-04-06T02:52:09.147-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|11, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.522-0500 c20011| 2016-04-06T02:52:09.147-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.531-0500 c20011| 2016-04-06T02:52:09.147-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.537-0500 c20011| 2016-04-06T02:52:09.151-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929129000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|11, t: 1 }, name-id: "161" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.542-0500 c20011| 2016-04-06T02:52:09.152-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.548-0500 c20011| 2016-04-06T02:52:09.153-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.552-0500 c20011| 2016-04-06T02:52:09.153-0500 D COMMAND [conn15] command: replSetUpdatePosition
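The config.locks findAndModify logged at 02:52:09.141 (12ms) is a distributed-lock acquisition: _id is the resource being locked ("multidrop.coll"), state 0 means free and 2 means exclusively held, and the query only matches when the lock is free, so at most one contender can flip it. A rough shell equivalent of that command shape, with placeholder identity strings instead of the test's mongovm16 values:

// Try to take the "multidrop.coll" lock; new: true returns the post-image so
// the caller can confirm it now owns the lock document.
var res = db.getSiblingDB("config").runCommand({
    findAndModify: "locks",
    query: { _id: "multidrop.coll", state: 0 },
    update: { $set: { ts: ObjectId(), state: 2,
                      who: "host:port:epoch:conn",  // placeholder identity
                      process: "host:port:epoch",   // placeholder identity
                      when: new Date(), why: "splitting chunk" } },
    upsert: true, new: true,
    writeConcern: { w: "majority", wtimeout: 15000 }
});

If another process already holds the lock, the query matches nothing and the upsert collides with the existing _id, which is how contention surfaces; most of the logged 12ms is the w:"majority" replication wait rather than the update itself.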
[js_test:multi_coll_drop] 2016-04-06T02:52:53.553-0500 c20011| 2016-04-06T02:52:09.153-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.557-0500 c20011| 2016-04-06T02:52:09.153-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|11, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.565-0500 c20011| 2016-04-06T02:52:09.153-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.569-0500 c20011| 2016-04-06T02:52:09.153-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929129000|12, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|11, t: 1 }, name-id: "161" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.574-0500 c20011| 2016-04-06T02:52:09.153-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.580-0500 c20011| 2016-04-06T02:52:09.156-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.582-0500 c20011| 2016-04-06T02:52:09.156-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.598-0500 c20011| 2016-04-06T02:52:09.156-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.599-0500 c20011| 2016-04-06T02:52:09.156-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.601-0500 c20011| 2016-04-06T02:52:09.156-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.607-0500 c20011| 2016-04-06T02:52:09.156-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.611-0500 c20011| 2016-04-06T02:52:09.156-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.615-0500 c20011| 2016-04-06T02:52:09.157-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.616-0500 c20011| 2016-04-06T02:52:09.157-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 43 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:19.157-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.618-0500 c20011| 2016-04-06T02:52:09.158-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.618-0500 c20011| 2016-04-06T02:52:09.276-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59154 #27 (23 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:52:53.619-0500 c20011| 2016-04-06T02:52:10.163-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.623-0500 c20011| 2016-04-06T02:52:10.074-0500 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.625-0500 c20011| 2016-04-06T02:52:10.163-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929127000|16, t: 1 } and is durable through: { ts: Timestamp 1459929127000|16, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.628-0500 c20011| 2016-04-06T02:52:10.163-0500 D COMMAND [conn2] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:52:53.631-0500 c20011| 2016-04-06T02:52:10.075-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.632-0500 c20011| 2016-04-06T02:52:10.164-0500 D COMMAND [conn3] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:52:53.634-0500 c20011| 2016-04-06T02:52:10.163-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.641-0500 c20011| 2016-04-06T02:52:10.164-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.650-0500 c20011| 2016-04-06T02:52:10.164-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1011ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.674-0500 c20011| 2016-04-06T02:52:10.164-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: -83.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-84.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-83.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|34 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 1020ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.676-0500 c20011| 2016-04-06T02:52:10.164-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.682-0500 c20011| 2016-04-06T02:52:10.165-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.689-0500 c20011| 2016-04-06T02:52:10.165-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:10.165-0500-5704c02a65c17830b843f19f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130165), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -84.0 }, max: { _id: MaxKey } }, left: { min: { _id: -84.0 }, max: { _id: -83.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -83.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.692-0500 c20011| 2016-04-06T02:52:10.165-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 43 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:52:53.705-0500 c20011| 2016-04-06T02:52:10.165-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } } cursorid:17466612721 numYields:1 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 1008ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.712-0500 c20011| 2016-04-06T02:52:10.174-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|1, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|12, t: 1 }, name-id: "162" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.712-0500 c20011| 2016-04-06T02:52:10.183-0500 D COMMAND [conn27] run command admin.$cmd { isMaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.714-0500 c20011| 2016-04-06T02:52:10.183-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.725-0500 c20011| 2016-04-06T02:52:10.183-0500 I COMMAND [conn27] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.746-0500 c20011| 2016-04-06T02:52:10.183-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 43 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, opTime: { ts: Timestamp 1459929129000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.754-0500 c20011| 2016-04-06T02:52:10.184-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:12.184Z
[js_test:multi_coll_drop] 2016-04-06T02:52:53.761-0500 c20011| 2016-04-06T02:52:10.184-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.762-0500 c20011| 2016-04-06T02:52:10.185-0500 D COMMAND [conn27] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.768-0500 c20011| 2016-04-06T02:52:10.185-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.778-0500 c20011| 2016-04-06T02:52:10.186-0500 I COMMAND [conn27] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.783-0500 c20011| 2016-04-06T02:52:10.187-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.788-0500 c20011| 2016-04-06T02:52:10.192-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.789-0500 c20011| 2016-04-06T02:52:10.192-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.791-0500 c20011| 2016-04-06T02:52:10.192-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|1, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.795-0500 c20011| 2016-04-06T02:52:10.192-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.799-0500 c20011| 2016-04-06T02:52:10.192-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929130000|1, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|12, t: 1 }, name-id: "162" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.800-0500 c20011| 2016-04-06T02:52:10.192-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.804-0500 c20011| 2016-04-06T02:52:10.192-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.812-0500 c20011| 2016-04-06T02:52:10.192-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.816-0500 c20011| 2016-04-06T02:52:10.192-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.818-0500 c20011| 2016-04-06T02:52:10.192-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|1, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.824-0500 c20011| 2016-04-06T02:52:10.192-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929130000|1, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929129000|12, t: 1 }, name-id: "162" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.838-0500 c20011| 2016-04-06T02:52:10.192-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.842-0500 c20011| 2016-04-06T02:52:10.216-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.843-0500 c20011| 2016-04-06T02:52:10.216-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.845-0500 c20011| 2016-04-06T02:52:10.216-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|1, t: 1 } and is durable through: { ts: Timestamp 1459929130000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.846-0500 c20011| 2016-04-06T02:52:10.216-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.849-0500 c20011| 2016-04-06T02:52:10.216-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.851-0500 c20011| 2016-04-06T02:52:10.216-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.854-0500 c20011| 2016-04-06T02:52:10.216-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 29ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.856-0500 c20011| 2016-04-06T02:52:10.216-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 30ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.859-0500 c20011| 2016-04-06T02:52:10.216-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:10.165-0500-5704c02a65c17830b843f19f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130165), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -84.0 }, max: { _id: MaxKey } }, left: { min: { _id: -84.0 }, max: { _id: -83.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -83.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 51ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.860-0500 c20011| 2016-04-06T02:52:10.217-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.862-0500 c20011| 2016-04-06T02:52:10.217-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.864-0500 c20011| 2016-04-06T02:52:10.217-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02965c17830b843f19e') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.866-0500 c20011| 2016-04-06T02:52:10.217-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.872-0500 c20011| 2016-04-06T02:52:10.217-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.872-0500 c20011| 2016-04-06T02:52:10.217-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.878-0500 c20011| 2016-04-06T02:52:10.217-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.880-0500 c20011| 2016-04-06T02:52:10.217-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|1, t: 1 } and is durable through: { ts: Timestamp 1459929130000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.881-0500 c20011| 2016-04-06T02:52:10.217-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02965c17830b843f19e') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.885-0500 c20011| 2016-04-06T02:52:10.217-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.888-0500 c20011| 2016-04-06T02:52:10.217-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.890-0500 c20011| 2016-04-06T02:52:10.217-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.893-0500 c20011| 2016-04-06T02:52:10.219-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.899-0500 c20011| 2016-04-06T02:52:10.219-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.902-0500 c20011| 2016-04-06T02:52:10.219-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|2, t: 1 } and is durable through: { ts: Timestamp 1459929130000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.924-0500 c20011| 2016-04-06T02:52:10.219-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.933-0500 c20011| 2016-04-06T02:52:10.219-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.937-0500 c20011| 2016-04-06T02:52:10.220-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.939-0500 c20011| 2016-04-06T02:52:10.220-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.940-0500 c20011| 2016-04-06T02:52:10.220-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|2, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|1, t: 1 }, name-id: "163" }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.945-0500 c20011| 2016-04-06T02:52:10.221-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.946-0500 c20011| 2016-04-06T02:52:10.221-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:53.949-0500 c20011| 2016-04-06T02:52:10.221-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|2, t: 1 } and is durable through: { ts: Timestamp 1459929130000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.949-0500 c20011| 2016-04-06T02:52:10.221-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.956-0500 c20011| 2016-04-06T02:52:10.221-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.958-0500 c20011| 2016-04-06T02:52:10.221-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.960-0500 c20011| 2016-04-06T02:52:10.221-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.963-0500 c20011| 2016-04-06T02:52:10.221-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.965-0500 c20011| 2016-04-06T02:52:10.221-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:53.980-0500 c20011| 2016-04-06T02:52:10.221-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02965c17830b843f19e') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms
[js_test:multi_coll_drop] 2016-04-06T02:52:53.985-0500 c20011| 2016-04-06T02:52:10.222-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.011-0500 c20011| 2016-04-06T02:52:10.225-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.013-0500 c20011| 2016-04-06T02:52:10.225-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.014-0500 c20011| 2016-04-06T02:52:10.225-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.026-0500 c20011| 2016-04-06T02:52:10.225-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|2, t: 1 } and is durable through: { ts: Timestamp 1459929130000|1, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.036-0500 c20011| 2016-04-06T02:52:10.225-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.049-0500 c20011| 2016-04-06T02:52:10.227-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.049-0500 c20011| 2016-04-06T02:52:10.227-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.052-0500 c20011| 2016-04-06T02:52:10.227-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.059-0500 c20011| 2016-04-06T02:52:10.227-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|2, t: 1 } and is durable through: { ts: Timestamp 1459929130000|2, t: 1 }
2016-04-06T02:52:54.084-0500 c20011| 2016-04-06T02:52:10.227-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:54.095-0500 c20011| 2016-04-06T02:52:10.228-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02a65c17830b843f1a0'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929130228), why: "splitting chunk [{ _id: -83.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:54.100-0500 c20011| 2016-04-06T02:52:10.228-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:54.101-0500 c20011| 2016-04-06T02:52:10.228-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:54.101-0500 c20011| 2016-04-06T02:52:10.228-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. 
[js_test:multi_coll_drop] 2016-04-06T02:52:54.104-0500 c20011| 2016-04-06T02:52:10.228-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.108-0500 c20011| 2016-04-06T02:52:10.228-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.110-0500 c20011| 2016-04-06T02:52:10.230-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|3, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|2, t: 1 }, name-id: "164" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.113-0500 c20011| 2016-04-06T02:52:10.230-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.114-0500 c20011| 2016-04-06T02:52:10.230-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.115-0500 c20011| 2016-04-06T02:52:10.230-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|3, t: 1 } and is durable through: { ts: Timestamp 1459929130000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.116-0500 c20011| 2016-04-06T02:52:10.230-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929130000|3, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|2, t: 1 }, name-id: "164" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.121-0500 c20011| 2016-04-06T02:52:10.230-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.128-0500 c20011| 2016-04-06T02:52:10.230-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.135-0500 c20011| 2016-04-06T02:52:10.230-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.140-0500 c20011| 2016-04-06T02:52:10.230-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.151-0500 c20011| 2016-04-06T02:52:10.230-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.153-0500 c20011| 2016-04-06T02:52:10.230-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|3, t: 1 } and is durable through: { ts: Timestamp 1459929130000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.159-0500 c20011| 2016-04-06T02:52:10.230-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929130000|3, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|2, t: 1 }, name-id: "164" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.164-0500 c20011| 2016-04-06T02:52:10.230-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.175-0500 c20011| 2016-04-06T02:52:10.231-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.179-0500 c20011| 2016-04-06T02:52:10.231-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.205-0500 c20011| 2016-04-06T02:52:10.232-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 2, cfgver: 1 } ] }
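The getMore traffic interleaved on conn13 and conn14 is the two secondaries tailing the primary's oplog: each awaitable getMore carries the follower's election term and the newest commit point it knows of, and the primary holds the cursor open for up to maxTimeMS waiting for new entries (nreturned:1 means one new oplog document was shipped that round). Roughly, one round looks like the sketch below; term and lastKnownCommittedOpTime are internal replication-protocol fields that ordinary clients do not send, and the cursor id is the one from the log:

    // Sketch: one tailing round against the primary's oplog, as on conn13.
    var reply = db.getSiblingDB("local").runCommand({
        getMore: NumberLong("20785203637"),  // tailable cursor opened earlier
        collection: "oplog.rs",
        maxTimeMS: 2500,                     // block up to 2.5s for new entries
        term: NumberLong(1),                 // follower's replication term
        lastKnownCommittedOpTime: { ts: Timestamp(1459929130, 2), t: NumberLong(1) }
    });

(The log prints optimes as e.g. Timestamp 1459929130000|2, i.e. milliseconds|increment; the shell constructor takes seconds and increment.)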
[js_test:multi_coll_drop] 2016-04-06T02:52:54.206-0500 c20011| 2016-04-06T02:52:10.232-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.215-0500 c20011| 2016-04-06T02:52:10.232-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.219-0500 c20011| 2016-04-06T02:52:10.232-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|3, t: 1 } and is durable through: { ts: Timestamp 1459929130000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.222-0500 c20011| 2016-04-06T02:52:10.232-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.225-0500 c20011| 2016-04-06T02:52:10.232-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.228-0500 c20011| 2016-04-06T02:52:10.232-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.236-0500 c20011| 2016-04-06T02:52:10.232-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.239-0500 c20011| 2016-04-06T02:52:10.232-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|3, t: 1 } and is durable through: { ts: Timestamp 1459929130000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.246-0500 c20011| 2016-04-06T02:52:10.232-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.249-0500 c20011| 2016-04-06T02:52:10.232-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.253-0500 c20011| 2016-04-06T02:52:10.232-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02a65c17830b843f1a0'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929130228), why: "splitting chunk [{ _id: -83.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02a65c17830b843f1a0'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929130228), why: "splitting chunk [{ _id: -83.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.262-0500 c20011| 2016-04-06T02:52:10.232-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.270-0500 c20011| 2016-04-06T02:52:10.232-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.271-0500 c20011| 2016-04-06T02:52:10.232-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.275-0500 c20011| 2016-04-06T02:52:10.232-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.280-0500 c20011| 2016-04-06T02:52:10.232-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 }
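Having won the lock with a majority write, conn25 immediately re-reads the sharding metadata with readConcern level "majority" plus an afterOpTime floor; that pairing is why the log shows it "Waiting for 'committed' snapshot": the server parks the read until the majority-committed snapshot has advanced past the opTime of the write the client just performed, which rules out reading stale metadata even across a primary failover. As a sketch (afterOpTime is an internal field used by the sharding code; normal drivers send only the level):

    // Sketch: causally-pinned metadata read, as issued by conn25 above.
    var res = db.getSiblingDB("config").runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        readConcern: {
            level: "majority",
            // don't answer from a snapshot older than our own last write
            afterOpTime: { ts: Timestamp(1459929130, 3), t: NumberLong(1) }
        },
        limit: 1,
        maxTimeMS: 30000
    });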
[js_test:multi_coll_drop] 2016-04-06T02:52:54.286-0500 c20011| 2016-04-06T02:52:10.232-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.286-0500 c20011| 2016-04-06T02:52:10.232-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1
[js_test:multi_coll_drop] 2016-04-06T02:52:54.288-0500 c20011| 2016-04-06T02:52:10.232-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.290-0500 c20011| 2016-04-06T02:52:10.233-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|36 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.294-0500 c20011| 2016-04-06T02:52:10.233-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.299-0500 c20011| 2016-04-06T02:52:10.233-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|36 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.300-0500 c20011| 2016-04-06T02:52:10.233-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:54.328-0500 c20011| 2016-04-06T02:52:10.233-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|36 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.338-0500 c20011| 2016-04-06T02:52:10.233-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.356-0500 c20011| 2016-04-06T02:52:10.233-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: -82.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-83.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-82.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|36 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.358-0500 c20011| 2016-04-06T02:52:10.233-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:52:54.364-0500 c20011| 2016-04-06T02:52:10.233-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:54.420-0500 c20011| 2016-04-06T02:52:10.233-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.421-0500 c20011| 2016-04-06T02:52:10.233-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-83.0" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.421-0500 c20011| 2016-04-06T02:52:10.233-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-82.0" }
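The applyOps request above is the split commit itself: one batch rewrites the single chunk [{ _id: -83.0 }, { _id: MaxKey }) as two chunk documents with bumped versions (lastmod 1000|37 and 1000|38), and the preCondition makes the batch conditional on the collection's highest lastmod still being 1000|36, so a concurrent metadata change fails the commit instead of corrupting the chunk map. Schematically (all values copied from the log; op "u" with b: true is an upsert, and o2 names the _id to match):

    // Sketch: the two-chunk split commit, mirroring conn25's applyOps.
    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp(1, 37),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: -82.0 },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-83.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp(1, 38),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: MaxKey },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-82.0" } }
        ],
        // abort unless the newest chunk of the collection is still version 1|36
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1, 36) } } ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });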
[js_test:multi_coll_drop] 2016-04-06T02:52:54.427-0500 c20011| 2016-04-06T02:52:10.234-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 281 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.429-0500 c20011| 2016-04-06T02:52:10.234-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.434-0500 c20011| 2016-04-06T02:52:10.235-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.434-0500 c20011| 2016-04-06T02:52:10.235-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.438-0500 c20011| 2016-04-06T02:52:10.236-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|4, t: 1 } and is durable through: { ts: Timestamp 1459929130000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.443-0500 c20011| 2016-04-06T02:52:10.236-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.450-0500 c20011| 2016-04-06T02:52:10.236-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.452-0500 c20011| 2016-04-06T02:52:10.236-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.460-0500 c20011| 2016-04-06T02:52:10.236-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.460-0500 c20011| 2016-04-06T02:52:10.236-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.464-0500 c20011| 2016-04-06T02:52:10.236-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.467-0500 c20011| 2016-04-06T02:52:10.236-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|4, t: 1 } and is durable through: { ts: Timestamp 1459929130000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.478-0500 c20011| 2016-04-06T02:52:10.236-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.480-0500 c20011| 2016-04-06T02:52:10.236-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.483-0500 c20011| 2016-04-06T02:52:10.237-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|4, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|3, t: 1 }, name-id: "165" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.500-0500 c20011| 2016-04-06T02:52:10.238-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.502-0500 c20011| 2016-04-06T02:52:10.238-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.505-0500 c20011| 2016-04-06T02:52:10.238-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.510-0500 c20011| 2016-04-06T02:52:10.238-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.514-0500 c20011| 2016-04-06T02:52:10.238-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|4, t: 1 } and is durable through: { ts: Timestamp 1459929130000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.517-0500 c20011| 2016-04-06T02:52:10.238-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.517-0500 c20011| 2016-04-06T02:52:10.238-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.525-0500 c20011| 2016-04-06T02:52:10.238-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.529-0500 c20011| 2016-04-06T02:52:10.238-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|4, t: 1 } and is durable through: { ts: Timestamp 1459929130000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.530-0500 c20011| 2016-04-06T02:52:10.238-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.535-0500 c20011| 2016-04-06T02:52:10.238-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.541-0500 c20011| 2016-04-06T02:52:10.238-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: -82.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-83.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-82.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|36 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.548-0500 c20011| 2016-04-06T02:52:10.238-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.557-0500 c20011| 2016-04-06T02:52:10.239-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.568-0500 c20011| 2016-04-06T02:52:10.239-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:10.239-0500-5704c02a65c17830b843f1a1", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130239), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -83.0 }, max: { _id: MaxKey } }, left: { min: { _id: -83.0 }, max: { _id: -82.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -82.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.574-0500 c20011| 2016-04-06T02:52:10.239-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.576-0500 c20011| 2016-04-06T02:52:10.239-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.577-0500 c20011| 2016-04-06T02:52:10.239-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } }
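With the new chunk documents majority-committed, the shard records the event in config.changelog; the before/left/right sub-documents are the audit trail for the split. It is an ordinary insert with the same write concern, roughly:

    // Sketch: the "split" audit record written above (values from the log).
    db.getSiblingDB("config").changelog.insert({
        _id: "mongovm16-2016-04-06T02:52:10.239-0500-5704c02a65c17830b843f1a1",
        server: "mongovm16",
        clientAddr: "192.168.100.28:59091",
        time: new Date(1459929130239),
        what: "split",
        ns: "multidrop.coll",
        details: {
            before: { min: { _id: -83.0 }, max: { _id: MaxKey } },
            left:  { min: { _id: -83.0 }, max: { _id: -82.0 },
                     lastmod: Timestamp(1, 37),
                     lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') },
            right: { min: { _id: -82.0 }, max: { _id: MaxKey },
                     lastmod: Timestamp(1, 38),
                     lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }
        }
    }, { writeConcern: { w: "majority", wtimeout: 15000 } });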
[js_test:multi_coll_drop] 2016-04-06T02:52:54.579-0500 c20011| 2016-04-06T02:52:10.240-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.580-0500 c20011| 2016-04-06T02:52:10.240-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|4, t: 1 }, name-id: "166" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.582-0500 c20011| 2016-04-06T02:52:10.241-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.582-0500 c20011| 2016-04-06T02:52:10.241-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.584-0500 c20011| 2016-04-06T02:52:10.241-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.588-0500 c20011| 2016-04-06T02:52:10.241-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|5, t: 1 } and is durable through: { ts: Timestamp 1459929130000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.589-0500 c20011| 2016-04-06T02:52:10.241-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929130000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|4, t: 1 }, name-id: "166" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.591-0500 c20011| 2016-04-06T02:52:10.241-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.593-0500 c20011| 2016-04-06T02:52:10.241-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.595-0500 c20011| 2016-04-06T02:52:10.242-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.597-0500 c20011| 2016-04-06T02:52:10.242-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.597-0500 c20011| 2016-04-06T02:52:10.242-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|5, t: 1 } and is durable through: { ts: Timestamp 1459929130000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.600-0500 c20011| 2016-04-06T02:52:10.242-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929130000|5, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|4, t: 1 }, name-id: "166" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.605-0500 c20011| 2016-04-06T02:52:10.242-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.607-0500 c20011| 2016-04-06T02:52:10.242-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.615-0500 c20011| 2016-04-06T02:52:10.242-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.619-0500 c20011| 2016-04-06T02:52:10.243-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.622-0500 c20011| 2016-04-06T02:52:10.243-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.626-0500 c20011| 2016-04-06T02:52:10.243-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.629-0500 c20011| 2016-04-06T02:52:10.243-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|5, t: 1 } and is durable through: { ts: Timestamp 1459929130000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.633-0500 c20011| 2016-04-06T02:52:10.243-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.637-0500 c20011| 2016-04-06T02:52:10.243-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.642-0500 c20011| 2016-04-06T02:52:10.243-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:10.239-0500-5704c02a65c17830b843f1a1", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130239), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -83.0 }, max: { _id: MaxKey } }, left: { min: { _id: -83.0 }, max: { _id: -82.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -82.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.644-0500 c20011| 2016-04-06T02:52:10.243-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02a65c17830b843f1a0') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.648-0500 c20011| 2016-04-06T02:52:10.243-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.653-0500 c20011| 2016-04-06T02:52:10.243-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02a65c17830b843f1a0') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 }
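The split being fully recorded, conn25 releases the distributed lock with the inverse findAndModify: it matches on the lock attempt's ts (hence the planner choosing the ts_1 index above) rather than on _id, so it can only release the exact lock instance it took, and it again demands a majority write so a lagging node cannot resurrect the lock after a failover. Sketch:

    // Sketch: release the distributed lock taken earlier (state 2 -> 0).
    db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { ts: ObjectId('5704c02a65c17830b843f1a0') },  // our lock attempt
        update: { $set: { state: 0 } },                       // 0 = free
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });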
[js_test:multi_coll_drop] 2016-04-06T02:52:54.661-0500 c20011| 2016-04-06T02:52:10.244-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.670-0500 c20011| 2016-04-06T02:52:10.244-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.673-0500 c20011| 2016-04-06T02:52:10.244-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.673-0500 c20011| 2016-04-06T02:52:10.244-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.676-0500 c20011| 2016-04-06T02:52:10.244-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|5, t: 1 } and is durable through: { ts: Timestamp 1459929130000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.694-0500 c20011| 2016-04-06T02:52:10.244-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.701-0500 c20011| 2016-04-06T02:52:10.244-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.704-0500 c20011| 2016-04-06T02:52:10.244-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.706-0500 c20011| 2016-04-06T02:52:10.244-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.708-0500 c20011| 2016-04-06T02:52:10.244-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.715-0500 c20011| 2016-04-06T02:52:10.245-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.720-0500 c20011| 2016-04-06T02:52:10.246-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.721-0500 c20011| 2016-04-06T02:52:10.246-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.722-0500 c20011| 2016-04-06T02:52:10.246-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.726-0500 c20011| 2016-04-06T02:52:10.246-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|6, t: 1 } and is durable through: { ts: Timestamp 1459929130000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.732-0500 c20011| 2016-04-06T02:52:10.246-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.737-0500 c20011| 2016-04-06T02:52:10.246-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.738-0500 c20011| 2016-04-06T02:52:10.247-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.739-0500 c20011| 2016-04-06T02:52:10.247-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.742-0500 c20011| 2016-04-06T02:52:10.247-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|6, t: 1 } and is durable through: { ts: Timestamp 1459929130000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.744-0500 c20011| 2016-04-06T02:52:10.247-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.747-0500 c20011| 2016-04-06T02:52:10.247-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.753-0500 c20011| 2016-04-06T02:52:10.248-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.758-0500 c20011| 2016-04-06T02:52:10.253-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|6, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|5, t: 1 }, name-id: "167" }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.767-0500 c20011| 2016-04-06T02:52:10.253-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.768-0500 c20011| 2016-04-06T02:52:10.253-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.774-0500 c20011| 2016-04-06T02:52:10.253-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|6, t: 1 } and is durable through: { ts: Timestamp 1459929130000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.775-0500 c20011| 2016-04-06T02:52:10.254-0500 D REPL [conn12] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.783-0500 c20011| 2016-04-06T02:52:10.254-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.798-0500 c20011| 2016-04-06T02:52:10.254-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.809-0500 c20011| 2016-04-06T02:52:10.254-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.810-0500 c20011| 2016-04-06T02:52:10.254-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.831-0500 c20011| 2016-04-06T02:52:10.254-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02a65c17830b843f1a0') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.836-0500 c20011| 2016-04-06T02:52:10.254-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.850-0500 c20011| 2016-04-06T02:52:10.254-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 2, cfgver: 1 } ] }
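Every replSetUpdatePosition round threaded through this section is a member reporting, for each node it knows about, how far that node has applied (appliedOpTime) and how far it has journaled (durableOpTime). When the primary sees a majority of durable positions at or beyond some optime it logs "Updating _lastCommittedOpTime", the committed snapshot advances, and the w: "majority" writers and readConcern-majority readers parked above wake up. The report is an internal admin command; decoded, one of the rounds above looks roughly like:

    // Sketch: one position report, as relayed up the sync-source chain.
    db.adminCommand({
        replSetUpdatePosition: 1,
        optimes: [
            // memberId 0: still at the pre-election write, hence term -1
            { durableOpTime: { ts: Timestamp(1459929117, 1), t: NumberLong(-1) },
              appliedOpTime: { ts: Timestamp(1459929117, 1), t: NumberLong(-1) },
              memberId: 0, cfgver: 1 },
            // memberId 2: applied and journaled through 1459929130000|6
            { durableOpTime: { ts: Timestamp(1459929130, 6), t: NumberLong(1) },
              appliedOpTime: { ts: Timestamp(1459929130, 6), t: NumberLong(1) },
              memberId: 2, cfgver: 1 }
        ]
    });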
[js_test:multi_coll_drop] 2016-04-06T02:52:54.853-0500 c20011| 2016-04-06T02:52:10.254-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:54.859-0500 c20011| 2016-04-06T02:52:10.254-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.867-0500 c20011| 2016-04-06T02:52:10.254-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|6, t: 1 } and is durable through: { ts: Timestamp 1459929130000|6, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.896-0500 c20011| 2016-04-06T02:52:10.254-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.900-0500 c20011| 2016-04-06T02:52:10.255-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.907-0500 c20011| 2016-04-06T02:52:10.256-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.926-0500 c20011| 2016-04-06T02:52:10.256-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.929-0500 c20011| 2016-04-06T02:52:10.256-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.935-0500 c20011| 2016-04-06T02:52:10.256-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:54.948-0500 c20011| 2016-04-06T02:52:10.256-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:54.954-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:54.955-0500 c20012| 2016-04-06T02:52:08.591-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:54.960-0500 c20012| 2016-04-06T02:52:08.595-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:54.963-0500 c20012| 2016-04-06T02:52:08.595-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:54.971-0500 c20012| 2016-04-06T02:52:08.595-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:54.981-0500 c20012| 2016-04-06T02:52:08.595-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.997-0500 c20012| 2016-04-06T02:52:08.595-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 459 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|21, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:54.998-0500 c20012| 2016-04-06T02:52:08.595-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 459 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:55.008-0500 c20012| 2016-04-06T02:52:08.595-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 459 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:55.009-0500 c20012| 2016-04-06T02:52:08.597-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:55.011-0500 c20012| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 461 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:55.012-0500 c20012| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 461 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:55.013-0500 c20012| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 461 finished with
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.016-0500 c20012| 2016-04-06T02:52:08.597-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 458 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.026-0500 c20012| 2016-04-06T02:52:08.598-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|22, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.031-0500 c20011| 2016-04-06T02:52:10.256-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02a65c17830b843f1a2'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929130256), why: "splitting chunk [{ _id: -82.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.032-0500 c20012| 2016-04-06T02:52:08.598-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:55.035-0500 c20012| 2016-04-06T02:52:08.598-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 464 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.598-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.035-0500 s20015| 2016-04-06T02:52:37.336-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:55.036-0500 s20015| 2016-04-06T02:52:37.336-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:52:55.037-0500 s20015| 2016-04-06T02:52:37.336-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:52:55.037-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.038-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.039-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.041-0500 s20015| 2016-04-06T02:52:37.364-0500 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 63 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:07.364-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929157363) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.042-0500 s20015| 2016-04-06T02:52:37.364-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:55.042-0500 s20015| 2016-04-06T02:52:37.364-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 64 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:55.044-0500 s20015| 
2016-04-06T02:52:37.373-0500 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 65 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:07.373-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.045-0500 c20011| 2016-04-06T02:52:10.256-0500 D QUERY [conn25] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:55.045-0500 c20012| 2016-04-06T02:52:08.598-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 464 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:55.047-0500 c20011| 2016-04-06T02:52:10.256-0500 D QUERY [conn25] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:55.047-0500 c20011| 2016-04-06T02:52:10.256-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.050-0500 c20011| 2016-04-06T02:52:10.257-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.053-0500 c20011| 2016-04-06T02:52:10.257-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.056-0500 c20011| 2016-04-06T02:52:10.259-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.057-0500 c20011| 2016-04-06T02:52:10.259-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:55.061-0500 c20011| 2016-04-06T02:52:10.259-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.066-0500 c20011| 2016-04-06T02:52:10.259-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|7, t: 1 } and is durable through: { ts: Timestamp 1459929130000|6, t: 1 } [js_test:multi_coll_drop] 
2016-04-06T02:52:55.072-0500 c20011| 2016-04-06T02:52:10.259-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.092-0500 c20011| 2016-04-06T02:52:10.259-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.100-0500 c20011| 2016-04-06T02:52:10.260-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.101-0500 c20011| 2016-04-06T02:52:10.260-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:55.105-0500 c20011| 2016-04-06T02:52:10.260-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|7, t: 1 } and is durable through: { ts: Timestamp 1459929130000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.108-0500 c20011| 2016-04-06T02:52:10.260-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.111-0500 c20011| 2016-04-06T02:52:10.260-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.112-0500 c20011| 2016-04-06T02:52:10.260-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.113-0500 c20011| 2016-04-06T02:52:10.264-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|7, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|6, t: 1 }, name-id: "168" } 
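The conn25 traffic above is the chunk-split critical section on the config server (c20011): the shard's mongod takes the distributed lock with a findAndModify on config.locks written at w: "majority", then re-reads config.collections and config.chunks with readConcern { level: "majority", afterOpTime: ... }, so each read blocks until the lock write is part of the committed snapshot — the "Required snapshot optime ... is not yet part of the current 'committed' snapshot" entry at 02:52:10.264 is exactly that wait. A minimal mongo-shell sketch of the same lock-then-majority-read pattern follows; this is not the test's own code, and the who/why strings are illustrative placeholders:

// Sketch only: the real commands are issued internally by mongod, not from the shell.
var config = db.getSiblingDB("config");
// Step 1: take the distributed lock (mirrors the findAndModify seen on conn25),
// upserting the lock document and requiring majority acknowledgement.
config.runCommand({
    findAndModify: "locks",
    query: { _id: "multidrop.coll", state: 0 },
    update: { $set: { state: 2, who: "<host:port:epoch:conn>", why: "splitting chunk" } }, // placeholders
    upsert: true,
    new: true,
    writeConcern: { w: "majority", wtimeout: 15000 },
    maxTimeMS: 30000
});
// Step 2: re-read the sharding metadata under majority read concern; the read
// cannot return until the lock write above is majority-committed. (The server-side
// reads in the log additionally pin afterOpTime to the lock write's optime.)
config.runCommand({
    find: "collections",
    filter: { _id: "multidrop.coll" },
    readConcern: { level: "majority" },
    limit: 1,
    maxTimeMS: 30000
});

Because the lock write used w: "majority", a majority read issued afterwards is guaranteed to observe it once the replica set's commit point advances, which is why the log shows the read parked on "Waiting for 'committed' snapshot" until the secondaries' replSetUpdatePosition reports move _lastCommittedOpTime forward.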
[js_test:multi_coll_drop] 2016-04-06T02:52:55.117-0500 c20011| 2016-04-06T02:52:10.265-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.118-0500 c20011| 2016-04-06T02:52:10.265-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:55.119-0500 c20011| 2016-04-06T02:52:10.265-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.120-0500 c20011| 2016-04-06T02:52:10.265-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.121-0500 c20011| 2016-04-06T02:52:10.265-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:55.123-0500 c20011| 2016-04-06T02:52:10.265-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|7, t: 1 } and is durable through: { ts: Timestamp 1459929130000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.124-0500 c20011| 2016-04-06T02:52:10.265-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.127-0500 c20011| 2016-04-06T02:52:10.265-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.138-0500 c20011| 2016-04-06T02:52:10.265-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|7, t: 1 } and is durable through: { ts: Timestamp 1459929130000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.140-0500 c20011| 2016-04-06T02:52:10.265-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 
1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.147-0500 c20011| 2016-04-06T02:52:10.265-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.154-0500 c20011| 2016-04-06T02:52:10.265-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02a65c17830b843f1a2'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929130256), why: "splitting chunk [{ _id: -82.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c02a65c17830b843f1a2'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929130256), why: "splitting chunk [{ _id: -82.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.159-0500 c20011| 2016-04-06T02:52:10.265-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.163-0500 c20011| 2016-04-06T02:52:10.265-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } cursorid:20785203637 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.164-0500 c20011| 2016-04-06T02:52:10.265-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.169-0500 c20011| 2016-04-06T02:52:10.266-0500 D COMMAND [conn25] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.170-0500 c20011| 
2016-04-06T02:52:10.266-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.175-0500 c20011| 2016-04-06T02:52:10.266-0500 D COMMAND [conn25] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.177-0500 c20011| 2016-04-06T02:52:10.266-0500 D QUERY [conn25] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:55.178-0500 c20011| 2016-04-06T02:52:10.266-0500 I COMMAND [conn25] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|7, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.180-0500 c20011| 2016-04-06T02:52:10.266-0500 D COMMAND [conn25] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|38 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|7, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.181-0500 c20011| 2016-04-06T02:52:10.266-0500 D COMMAND [conn25] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.184-0500 c20011| 2016-04-06T02:52:10.266-0500 D COMMAND [conn25] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|38 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|7, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.187-0500 c20011| 2016-04-06T02:52:10.266-0500 D QUERY [conn25] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:55.191-0500 c20011| 2016-04-06T02:52:10.266-0500 I COMMAND [conn25] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|38 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|7, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.204-0500 c20011| 2016-04-06T02:52:10.266-0500 D COMMAND [conn25] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: -81.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-82.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-81.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|38 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.206-0500 c20011| 2016-04-06T02:52:10.267-0500 D QUERY [conn25] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:52:55.224-0500 c20011| 2016-04-06T02:52:10.267-0500 D QUERY [conn25] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:55.242-0500 c20011| 2016-04-06T02:52:10.267-0500 I COMMAND [conn25] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.246-0500 c20011| 2016-04-06T02:52:10.267-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-82.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:55.258-0500 c20011| 2016-04-06T02:52:10.267-0500 D QUERY [conn25] Using idhack: { _id: "multidrop.coll-_id_-81.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:55.260-0500 c20011| 2016-04-06T02:52:10.267-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", 
maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.262-0500 c20011| 2016-04-06T02:52:10.268-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|7, t: 1 }, name-id: "169" } [js_test:multi_coll_drop] 2016-04-06T02:52:55.266-0500 c20011| 2016-04-06T02:52:10.269-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.266-0500 c20011| 2016-04-06T02:52:10.269-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:55.268-0500 c20011| 2016-04-06T02:52:10.269-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.273-0500 c20011| 2016-04-06T02:52:10.269-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|8, t: 1 } and is durable through: { ts: Timestamp 1459929130000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.274-0500 c20011| 2016-04-06T02:52:10.269-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929130000|8, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|7, t: 1 }, name-id: "169" } [js_test:multi_coll_drop] 2016-04-06T02:52:55.280-0500 c20011| 2016-04-06T02:52:10.269-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.284-0500 c20011| 2016-04-06T02:52:10.269-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.287-0500 c20011| 2016-04-06T02:52:10.271-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.289-0500 c20011| 
2016-04-06T02:52:10.271-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.302-0500 c20011| 2016-04-06T02:52:10.276-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.302-0500 c20011| 2016-04-06T02:52:10.276-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:55.304-0500 c20011| 2016-04-06T02:52:10.276-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.305-0500 c20011| 2016-04-06T02:52:10.276-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|8, t: 1 } and is durable through: { ts: Timestamp 1459929130000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.308-0500 c20011| 2016-04-06T02:52:10.276-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.311-0500 c20011| 2016-04-06T02:52:10.276-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.331-0500 c20011| 2016-04-06T02:52:10.276-0500 I COMMAND [conn25] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: -81.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-82.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-81.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|38 } } ], writeConcern: { w: 
"majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.342-0500 c20011| 2016-04-06T02:52:10.276-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.349-0500 c20011| 2016-04-06T02:52:10.277-0500 D COMMAND [conn25] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:10.276-0500-5704c02a65c17830b843f1a3", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130276), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -82.0 }, max: { _id: MaxKey } }, left: { min: { _id: -82.0 }, max: { _id: -81.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -81.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.355-0500 c20011| 2016-04-06T02:52:10.277-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.359-0500 s20014| 2016-04-06T02:52:38.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 192.168.100.28:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:55.373-0500 c20012| 2016-04-06T02:52:08.599-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 464 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|23, t: 1, h: 6062546662183075299, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.598-0500-5704c02865c17830b843f181", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128598), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -99.0 }, max: { _id: MaxKey } }, left: { min: { _id: -99.0 }, max: { _id: -98.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -98.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.373-0500 c20012| 2016-04-06T02:52:08.599-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|23 and ending at ts: Timestamp 1459929128000|23 [js_test:multi_coll_drop] 2016-04-06T02:52:55.376-0500 c20012| 2016-04-06T02:52:08.599-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:55.377-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.378-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.381-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.383-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.395-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.397-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.397-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.398-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.399-0500 c20012| 2016-04-06T02:52:08.599-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.401-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.402-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.403-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.404-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.406-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.409-0500 c20012| 2016-04-06T02:52:08.600-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:55.410-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.414-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.420-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.420-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.421-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:52:55.423-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.423-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.424-0500 s20015| 2016-04-06T02:52:37.373-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:55.426-0500 s20015| 2016-04-06T02:52:37.373-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 66 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:55.435-0500 c20011| 2016-04-06T02:52:10.277-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.452-0500 c20011| 2016-04-06T02:52:10.277-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.456-0500 c20011| 2016-04-06T02:52:10.277-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:55.458-0500 c20011| 2016-04-06T02:52:10.277-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|8, t: 1 } and is durable through: { ts: Timestamp 1459929130000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.459-0500 c20011| 2016-04-06T02:52:10.277-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.468-0500 c20011| 2016-04-06T02:52:10.277-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:55.478-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.479-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool 
repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.483-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.484-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.487-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.524-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.526-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.527-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.535-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.536-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.542-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.556-0500 c20012| 2016-04-06T02:52:08.600-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.568-0500 c20012| 2016-04-06T02:52:08.601-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:55.584-0500 c20012| 2016-04-06T02:52:08.601-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.587-0500 c20012| 2016-04-06T02:52:08.601-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 466 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|22, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.593-0500 c20012| 2016-04-06T02:52:08.601-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 466 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:55.597-0500 c20012| 2016-04-06T02:52:08.601-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 466 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.599-0500 c20012| 2016-04-06T02:52:08.601-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 468 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.601-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|22, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.602-0500 c20012| 2016-04-06T02:52:08.601-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 468 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:55.605-0500 c20012| 2016-04-06T02:52:08.602-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.611-0500 c20012| 2016-04-06T02:52:08.602-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 469 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|23, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:55.612-0500 c20012| 2016-04-06T02:52:08.602-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 469 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:55.613-0500 c20012| 2016-04-06T02:52:08.602-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 469 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.615-0500 c20012| 2016-04-06T02:52:08.603-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 468 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.616-0500 c20012| 2016-04-06T02:52:08.603-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|23, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.618-0500 c20012| 2016-04-06T02:52:08.603-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:55.621-0500 c20012| 2016-04-06T02:52:08.603-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 472 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.603-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:55.621-0500 c20012| 2016-04-06T02:52:08.603-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 472 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:55.623-0500 c20012| 2016-04-06T02:52:08.603-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 472 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|24, t: 1, h: 3786699700518885231, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:55.628-0500 c20012| 2016-04-06T02:52:08.604-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|24 and ending at ts: Timestamp 1459929128000|24 [js_test:multi_coll_drop] 2016-04-06T02:52:55.631-0500 c20012| 2016-04-06T02:52:08.604-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:55.633-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.634-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.636-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.637-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.638-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.643-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.646-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.646-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.647-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.648-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.651-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.655-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.656-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.658-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.659-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.662-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.664-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.669-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.671-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.672-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool 
repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:55.674-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.035-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.046-0500 c20013| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.052-0500 c20013| 2016-04-06T02:52:08.870-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:56.054-0500 c20013| 2016-04-06T02:52:08.870-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:56.060-0500 c20013| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 678 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:56.062-0500 c20013| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 678 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:56.064-0500 c20013| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 678 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:56.067-0500 c20013| 2016-04-06T02:52:08.871-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 680 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.871-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:56.070-0500 c20013| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 680 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:56.072-0500 c20013| 2016-04-06T02:52:08.871-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
[js_test:multi_coll_drop] 2016-04-06T02:52:56.073-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.074-0500 c20012| 2016-04-06T02:52:08.604-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.075-0500 c20012| 2016-04-06T02:52:08.604-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:56.077-0500 c20012| 2016-04-06T02:52:08.604-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.078-0500 s20014| 2016-04-06T02:52:38.716-0500 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_TIMEOUT] server [192.168.100.28:20012]
[js_test:multi_coll_drop] 2016-04-06T02:52:56.080-0500 s20014| 2016-04-06T02:52:38.716-0500 D - [ReplicaSetMonitorWatcher] User Assertion: 6:network error while attempting to run command 'ismaster' on host 'mongovm16:20012'
[js_test:multi_coll_drop] 2016-04-06T02:52:56.083-0500 s20014| 2016-04-06T02:52:38.716-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929123724358 microSec, clearing pool for mongovm16:20012 of 0 connections
[js_test:multi_coll_drop] 2016-04-06T02:52:56.083-0500 s20014| 2016-04-06T02:52:38.716-0500 D NETWORK [ReplicaSetMonitorWatcher] Marking host mongovm16:20012 as failed
[js_test:multi_coll_drop] 2016-04-06T02:52:56.085-0500 s20014| 2016-04-06T02:52:38.716-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20011, no events
[js_test:multi_coll_drop] 2016-04-06T02:52:56.089-0500 s20014| 2016-04-06T02:52:38.716-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20013, no events
[js_test:multi_coll_drop] 2016-04-06T02:52:56.090-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.095-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.100-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.101-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.105-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.114-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.115-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.116-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.117-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.117-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.123-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.124-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.125-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.126-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.127-0500 c20012| 2016-04-06T02:52:08.605-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.132-0500 c20012| 2016-04-06T02:52:08.606-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 474 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.606-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|23, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.132-0500 c20012| 2016-04-06T02:52:08.606-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.134-0500 c20012| 2016-04-06T02:52:08.606-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.137-0500 c20012| 2016-04-06T02:52:08.606-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 474 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.144-0500 c20013| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 681 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.149-0500 c20012| 2016-04-06T02:52:08.606-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
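
The s20014 block above is the mongos ReplicaSetMonitor timing out an ismaster probe against config member mongovm16:20012 and marking the host failed; with this suite's continuous config-server stepdowns, such RECV_TIMEOUT errors are expected noise rather than a test failure. A rough shell equivalent of what the watcher does on each pass, assuming the three config hosts named in this log:

    // Probe each seed with isMaster and drop the ones that fail,
    // mirroring "Marking host ... as failed" in the monitor.
    var hosts = ["mongovm16:20011", "mongovm16:20012", "mongovm16:20013"];
    hosts.forEach(function (h) {
        try {
            var res = new Mongo(h).getDB("admin").runCommand({ isMaster: 1 });
            print(h + (res.ismaster ? " PRIMARY" : " SECONDARY"));
        } catch (e) {
            print(h + " marked failed: " + e);  // connection refused / timed out
        }
    });
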
[js_test:multi_coll_drop] 2016-04-06T02:52:56.155-0500 c20012| 2016-04-06T02:52:08.607-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.161-0500 c20012| 2016-04-06T02:52:08.607-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 475 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|23, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.163-0500 c20012| 2016-04-06T02:52:08.607-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 475 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.167-0500 c20012| 2016-04-06T02:52:08.607-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 475 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.168-0500 c20013| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 681 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.176-0500 c20012| 2016-04-06T02:52:08.609-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.184-0500 c20011| 2016-04-06T02:52:10.279-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.184-0500 c20011| 2016-04-06T02:52:10.279-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:56.194-0500 c20011| 2016-04-06T02:52:10.279-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.197-0500 c20011| 2016-04-06T02:52:10.279-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|9, t: 1 } and is durable through: { ts: Timestamp 1459929130000|8, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.205-0500 c20011| 2016-04-06T02:52:10.279-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:56.209-0500 c20011| 2016-04-06T02:52:10.279-0500 D REPL [conn25] Required snapshot optime: { ts: Timestamp 1459929130000|9, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|8, t: 1 }, name-id: "170" }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.212-0500 c20011| 2016-04-06T02:52:10.279-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.215-0500 c20011| 2016-04-06T02:52:10.280-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.216-0500 c20011| 2016-04-06T02:52:10.280-0500 D COMMAND [conn12] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:56.216-0500 c20011| 2016-04-06T02:52:10.280-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|8, t: 1 } and is durable through: { ts: Timestamp 1459929130000|8, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.223-0500 c20012| 2016-04-06T02:52:08.609-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 477 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.225-0500 c20011| 2016-04-06T02:52:10.280-0500 D REPL [conn12] Required snapshot optime: { ts: Timestamp 1459929130000|9, t: 1 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|8, t: 1 }, name-id: "170" }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.226-0500 c20011| 2016-04-06T02:52:10.280-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.228-0500 c20011| 2016-04-06T02:52:10.280-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:56.230-0500 c20011| 2016-04-06T02:52:10.281-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.230-0500 c20012| 2016-04-06T02:52:08.609-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 477 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.232-0500 c20011| 2016-04-06T02:52:10.281-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:56.234-0500 c20011| 2016-04-06T02:52:10.281-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.235-0500 c20011| 2016-04-06T02:52:10.281-0500 D COMMAND [conn15] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:52:56.235-0500 c20011| 2016-04-06T02:52:10.281-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.236-0500 c20011| 2016-04-06T02:52:10.281-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|9, t: 1 } and is durable through: { ts: Timestamp 1459929130000|9, t: 1 }
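
The "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines on conn25/conn12 show waiters parked on the primary until the majority commit point catches up: each replSetUpdatePosition notification feeds the calculation, and once a majority's durable optimes cover the awaited optime, the committed snapshot advances and the waiters are released. A hedged shell illustration of what such a waiter is doing (the collection name here is hypothetical):

    // A w:majority write blocks exactly like those snapshot waiters: it
    // returns only after the commit point covers its own optime.
    var db = new Mongo("mongovm16:20011").getDB("config");   // assumed primary from the log
    printjson(db.runCommand({
        insert: "commit_probe",                              // hypothetical collection
        documents: [{ _id: "probe", at: new Date() }],
        writeConcern: { w: "majority", wtimeout: 15000 }     // same wtimeout the split uses
    }));
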
[js_test:multi_coll_drop] 2016-04-06T02:52:56.238-0500 c20013| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 681 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.240-0500 c20013| 2016-04-06T02:52:08.872-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 680 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.241-0500 c20013| 2016-04-06T02:52:08.872-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|50, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.242-0500 c20013| 2016-04-06T02:52:08.873-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:56.246-0500 c20013| 2016-04-06T02:52:08.873-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 684 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.873-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.247-0500 c20013| 2016-04-06T02:52:08.873-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 684 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.260-0500 c20013| 2016-04-06T02:52:08.873-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 684 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|51, t: 1, h: -8522370368222023966, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.872-0500-5704c02865c17830b843f18f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128872), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -92.0 }, max: { _id: MaxKey } }, left: { min: { _id: -92.0 }, max: { _id: -91.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -91.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.268-0500 c20013| 2016-04-06T02:52:08.873-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|51 and ending at ts: Timestamp 1459929128000|51
[js_test:multi_coll_drop] 2016-04-06T02:52:56.271-0500 c20013| 2016-04-06T02:52:08.873-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:56.279-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.280-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.281-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.283-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.283-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.284-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.287-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.289-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.293-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.295-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.295-0500 c20013| 2016-04-06T02:52:08.874-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:56.302-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.311-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.311-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.314-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.325-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.325-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.347-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.361-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.367-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
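
The insert replicated in request 684 above is the config.changelog audit record for a chunk split: [{ _id: -92.0 }, MaxKey) became [-92, -91) and [-91, MaxKey), with the new chunk versions recorded in lastmod (1000|19 and 1000|20 under the same epoch). The same record can be inspected directly on a config server, assuming the hosts from this log:

    // Read back the most recent split entry the oplog batch just carried.
    var config = new Mongo("mongovm16:20011").getDB("config");  // assumed config primary
    config.changelog.find({ what: "split", ns: "multidrop.coll" })
          .sort({ time: -1 })
          .limit(1)
          .forEach(printjson);
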
[js_test:multi_coll_drop] 2016-04-06T02:52:56.368-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.371-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.375-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.376-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.376-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.377-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.383-0500 c20013| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.387-0500 c20013| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.389-0500 c20013| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.392-0500 c20013| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.400-0500 c20013| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.401-0500 c20013| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.401-0500 c20013| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.402-0500 c20013| 2016-04-06T02:52:08.875-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:56.412-0500 c20013| 2016-04-06T02:52:08.876-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.422-0500 c20013| 2016-04-06T02:52:08.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 686 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.425-0500 c20013| 2016-04-06T02:52:08.876-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 687 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.876-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.427-0500 c20013| 2016-04-06T02:52:08.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 686 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.429-0500 c20013| 2016-04-06T02:52:08.876-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 687 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.436-0500 c20013| 2016-04-06T02:52:08.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 686 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.441-0500 c20013| 2016-04-06T02:52:08.878-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.454-0500 c20013| 2016-04-06T02:52:08.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 689 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.456-0500 c20013| 2016-04-06T02:52:08.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 689 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.457-0500 c20013| 2016-04-06T02:52:08.879-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 689 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.460-0500 c20013| 2016-04-06T02:52:08.881-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 687 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.461-0500 c20013| 2016-04-06T02:52:08.881-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|51, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.463-0500 c20013| 2016-04-06T02:52:08.882-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:56.466-0500 c20013| 2016-04-06T02:52:08.882-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 692 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.882-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.469-0500 c20013| 2016-04-06T02:52:08.882-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 692 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.472-0500 c20013| 2016-04-06T02:52:08.882-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 692 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|52, t: 1, h: -8575808186857473367, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.481-0500 c20013| 2016-04-06T02:52:08.882-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|52 and ending at ts: Timestamp 1459929128000|52
[js_test:multi_coll_drop] 2016-04-06T02:52:56.486-0500 c20013| 2016-04-06T02:52:08.882-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
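
The op replicated in request 692 is the config-server distributed lock for "multidrop.coll" being released ($set: { state: 0 }) once a split finishes; a later op re-takes it with state: 2 for the next split. The lock document lives in config.locks and can be watched directly, assuming the hosts from this log:

    // Inspect the distributed lock the split operations contend on.
    var config = new Mongo("mongovm16:20011").getDB("config");  // assumed config primary
    printjson(config.locks.findOne({ _id: "multidrop.coll" }));
    // state: 0 = unlocked, 2 = held; "why" records which operation took it
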
[js_test:multi_coll_drop] 2016-04-06T02:52:56.492-0500 c20013| 2016-04-06T02:52:08.882-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.502-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.512-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.516-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.517-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.517-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.518-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.519-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.520-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.522-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.523-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.523-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.525-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.525-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.532-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.532-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.536-0500 c20013| 2016-04-06T02:52:08.883-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:56.543-0500 c20013| 2016-04-06T02:52:08.883-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.549-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.556-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.557-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.558-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.561-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.561-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.563-0500 c20013| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.565-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.566-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.566-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.567-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.567-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.569-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.569-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.571-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.571-0500 c20013| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.572-0500 c20013| 2016-04-06T02:52:08.884-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
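
"Using idhack" above means the replicated update keyed on { _id: "multidrop.coll" } bypassed the query planner and used the _id index point-lookup fast path (IDHACK); that is also why the neighbouring "Only one plan is available" plan-cache lines carry a COLLSCAN planSummary only for the unindexed internal scans, not for these _id lookups. The fast path is visible in explain output for any exact-_id query, assuming the hosts from this log:

    // Any exact-_id predicate takes the IDHACK stage instead of plan ranking.
    var config = new Mongo("mongovm16:20011").getDB("config");  // assumed config primary
    var plan = config.locks.find({ _id: "multidrop.coll" }).explain();
    printjson(plan.queryPlanner.winningPlan);                   // expect stage: "IDHACK"
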
[js_test:multi_coll_drop] 2016-04-06T02:52:56.574-0500 c20013| 2016-04-06T02:52:08.884-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.597-0500 c20013| 2016-04-06T02:52:08.884-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 694 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.598-0500 c20013| 2016-04-06T02:52:08.884-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 694 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.601-0500 c20013| 2016-04-06T02:52:08.884-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 694 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.607-0500 c20013| 2016-04-06T02:52:08.884-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 696 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.884-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.609-0500 c20013| 2016-04-06T02:52:08.884-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 696 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.619-0500 c20013| 2016-04-06T02:52:08.885-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.623-0500 c20013| 2016-04-06T02:52:08.885-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 697 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.624-0500 c20013| 2016-04-06T02:52:08.885-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 697 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.628-0500 c20013| 2016-04-06T02:52:08.885-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 697 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.631-0500 c20013| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 696 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.635-0500 c20013| 2016-04-06T02:52:08.886-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|52, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.637-0500 c20013| 2016-04-06T02:52:08.886-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:56.648-0500 c20013| 2016-04-06T02:52:08.886-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 700 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.886-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.653-0500 c20013| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 700 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.657-0500 c20013| 2016-04-06T02:52:08.886-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.660-0500 c20013| 2016-04-06T02:52:08.886-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.661-0500 c20013| 2016-04-06T02:52:08.886-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.664-0500 c20013| 2016-04-06T02:52:08.886-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:52:56.671-0500 c20013| 2016-04-06T02:52:08.886-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:52:56.676-0500 c20013| 2016-04-06T02:52:08.889-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 700 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|53, t: 1, h: 6499600219381119724, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f190'), state: 2, when: new Date(1459929128888), why: "splitting chunk [{ _id: -91.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.679-0500 c20013| 2016-04-06T02:52:08.889-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|53 and ending at ts: Timestamp 1459929128000|53
[js_test:multi_coll_drop] 2016-04-06T02:52:56.680-0500 c20013| 2016-04-06T02:52:08.889-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:56.683-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.689-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.691-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.692-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.693-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.695-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.697-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.698-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.701-0500 c20013| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.702-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.703-0500 c20013| 2016-04-06T02:52:08.890-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:52:56.705-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.710-0500 c20013| 2016-04-06T02:52:08.890-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.712-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.713-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.714-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.715-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.719-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.721-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.722-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.722-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.723-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.724-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.726-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.727-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.729-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.729-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.730-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.731-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.731-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.732-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.735-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.737-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.738-0500 c20013| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:52:56.739-0500 c20013| 2016-04-06T02:52:08.891-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:52:56.747-0500 c20013| 2016-04-06T02:52:08.891-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.750-0500 c20013| 2016-04-06T02:52:08.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 702 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.751-0500 c20013| 2016-04-06T02:52:08.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 702 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.754-0500 c20013| 2016-04-06T02:52:08.891-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 703 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.891-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.754-0500 c20013| 2016-04-06T02:52:08.891-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 703 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.755-0500 c20013| 2016-04-06T02:52:08.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 702 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.757-0500 c20013| 2016-04-06T02:52:08.892-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.759-0500 c20013| 2016-04-06T02:52:08.892-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 705 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.761-0500 c20013| 2016-04-06T02:52:08.892-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 705 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.762-0500 c20013| 2016-04-06T02:52:08.892-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 705 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.764-0500 c20013| 2016-04-06T02:52:08.892-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 703 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.765-0500 c20013| 2016-04-06T02:52:08.892-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|53, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.767-0500 c20013| 2016-04-06T02:52:08.892-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:52:56.772-0500 c20013| 2016-04-06T02:52:08.892-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 708 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.892-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.773-0500 c20013| 2016-04-06T02:52:08.892-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 708 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:56.785-0500 c20013| 2016-04-06T02:52:08.894-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 708 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|54, t: 1, h: -140542895342390815, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: -90.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-91.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-90.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:56.792-0500 c20013| 2016-04-06T02:52:08.895-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|54 and ending at ts: Timestamp 1459929128000|54
[js_test:multi_coll_drop] 2016-04-06T02:52:56.793-0500 c20013| 2016-04-06T02:52:08.895-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
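
Request 708 carries the actual metadata commit for the next split: a single applyOps command on the config server rewrites both chunk documents atomically, bumping each chunk's minor version (lastmod 1000|21 and 1000|22 under the same lastmodEpoch) with w:majority, which is what makes the |54 entry and the preceding lock ops a consistent unit. The resulting chunk layout can be checked directly, assuming the hosts from this log:

    // Show the two freshest chunks for the collection after the split commits.
    var config = new Mongo("mongovm16:20011").getDB("config");  // assumed config primary
    config.chunks.find({ ns: "multidrop.coll" })
          .sort({ lastmod: -1 })
          .limit(2)
          .forEach(function (c) {
              print(tojson(c.min) + " -> " + tojson(c.max) + " @ " + tojson(c.lastmod));
          });
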
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:56.796-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.796-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.796-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.801-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.801-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.804-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.806-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.809-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.810-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.813-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.815-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.818-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.820-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.821-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.821-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.822-0500 c20013| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.822-0500 c20013| 2016-04-06T02:52:08.895-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:56.823-0500 c20013| 2016-04-06T02:52:08.896-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-91.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:56.825-0500 c20013| 2016-04-06T02:52:08.896-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-90.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:56.825-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
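The c20013 records above trace one iteration of the secondary's oplog-tailing loop: a getMore against local.oplog.rs carrying maxTimeMS: 2500 and the node's lastKnownCommittedOpTime, a fetched batch (here an applyOps touching two config.chunks documents for a split), the repl writer worker pool applying it via idhack updates, and a replSetUpdatePosition report back to the sync source. The shell sketch below approximates the same tailing read; the host is taken from the log, but the ns filter and the client-side loop are illustrative assumptions, since the server drives this internally with an awaitData cursor rather than a shell loop.

    // Sketch only: tail the primary's oplog the way c20013 does above.
    // Assumptions: host mongovm16:20011 (from the log); the /^config\./ filter
    // and this loop are for illustration. The loop exits when the cursor dies.
    var conn = new Mongo("mongovm16:20011");
    var oplog = conn.getDB("local").getCollection("oplog.rs");
    var cur = oplog.find({ ns: /^config\./ })
                   .sort({ $natural: 1 })
                   .addOption(DBQuery.Option.tailable | DBQuery.Option.awaitData);
    while (cur.hasNext()) {
        printjson(cur.next());  // e.g. the applyOps chunk-split entries above
    }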
2016-04-06T02:52:56.826-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.827-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.829-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.831-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.834-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.834-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.836-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.836-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.838-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.839-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.840-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.842-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.842-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.844-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.846-0500 c20013| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.847-0500 c20013| 2016-04-06T02:52:08.897-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:56.851-0500 c20013| 2016-04-06T02:52:08.897-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:56.854-0500 c20013| 2016-04-06T02:52:08.897-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 710 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:56.854-0500 c20013| 2016-04-06T02:52:08.897-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 710 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:56.855-0500 c20013| 2016-04-06T02:52:08.897-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 710 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:56.858-0500 c20013| 2016-04-06T02:52:08.898-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 712 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.898-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:56.860-0500 c20013| 2016-04-06T02:52:08.898-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 712 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:56.861-0500 c20013| 2016-04-06T02:52:08.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 712 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:56.862-0500 c20013| 2016-04-06T02:52:08.900-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|54, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:56.863-0500 c20013| 2016-04-06T02:52:08.900-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:56.870-0500 c20013| 2016-04-06T02:52:08.900-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:56.877-0500 c20013| 2016-04-06T02:52:08.900-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 714 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:56.878-0500 c20013| 2016-04-06T02:52:08.900-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 714 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:56.883-0500 c20013| 2016-04-06T02:52:08.900-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 715 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.900-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:56.886-0500 c20013| 2016-04-06T02:52:08.900-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 714 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:56.892-0500 c20013| 2016-04-06T02:52:08.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 715 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:56.905-0500 c20013| 2016-04-06T02:52:08.901-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 715 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|55, t: 1, h: -853858200892887985, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.900-0500-5704c02865c17830b843f191", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128900), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -91.0 }, max: { _id: MaxKey } }, left: { min: { _id: -91.0 }, max: { _id: -90.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -90.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:56.928-0500 c20013| 2016-04-06T02:52:08.901-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|55 and ending at ts: Timestamp 1459929128000|55 [js_test:multi_coll_drop] 2016-04-06T02:52:56.930-0500 c20013| 2016-04-06T02:52:08.901-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:56.934-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.936-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:56.954-0500 c20011| 2016-04-06T02:52:10.281-0500 D REPL [conn15] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:56.964-0500 c20011| 2016-04-06T02:52:10.281-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:56.967-0500 c20011| 2016-04-06T02:52:10.281-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:56.975-0500 c20011| 2016-04-06T02:52:10.281-0500 I COMMAND [conn25] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:10.276-0500-5704c02a65c17830b843f1a3", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130276), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -82.0 }, max: { _id: MaxKey } }, left: { min: { _id: -82.0 }, max: { _id: -81.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -81.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:56.979-0500 c20011| 2016-04-06T02:52:10.281-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:56.983-0500 c20011| 2016-04-06T02:52:10.281-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02a65c17830b843f1a2') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:56.998-0500 c20012| 2016-04-06T02:52:08.609-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] 
Request 477 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.003-0500 c20011| 2016-04-06T02:52:10.281-0500 D QUERY [conn25] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:52:57.003-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.004-0500 c20012| 2016-04-06T02:52:08.609-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 474 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.004-0500 c20011| 2016-04-06T02:52:10.282-0500 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c02a65c17830b843f1a2') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.004-0500 c20012| 2016-04-06T02:52:08.610-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|24, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.005-0500 c20012| 2016-04-06T02:52:08.610-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:57.029-0500 c20012| 2016-04-06T02:52:08.610-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 480 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.610-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.030-0500 c20012| 2016-04-06T02:52:08.610-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 480 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.032-0500 c20012| 2016-04-06T02:52:08.611-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.037-0500 c20012| 2016-04-06T02:52:08.611-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.038-0500 c20012| 2016-04-06T02:52:08.611-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.041-0500 c20012| 2016-04-06T02:52:08.612-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:57.052-0500 c20011| 2016-04-06T02:52:10.282-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|9, t: 1 } } cursorid:17466612721 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.063-0500 c20012| 2016-04-06T02:52:08.613-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.067-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.069-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.074-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.077-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.079-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.081-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.087-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.089-0500 c20013| 2016-04-06T02:52:08.902-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:57.090-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.092-0500 c20012| 2016-04-06T02:52:08.614-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.097-0500 c20011| 2016-04-06T02:52:10.283-0500 D COMMAND [conn15] run command 
admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.100-0500 c20012| 2016-04-06T02:52:08.614-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.102-0500 c20012| 2016-04-06T02:52:08.614-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.103-0500 c20011| 2016-04-06T02:52:10.283-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:57.106-0500 c20012| 2016-04-06T02:52:08.614-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:57.109-0500 c20012| 2016-04-06T02:52:08.614-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|24, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.109-0500 c20012| 2016-04-06T02:52:08.616-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 480 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|25, t: 1, h: -2372094527379662980, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f182'), state: 2, when: new Date(1459929128615), why: "splitting chunk [{ _id: -98.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.110-0500 c20012| 2016-04-06T02:52:08.616-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|25 and ending at ts: Timestamp 1459929128000|25 [js_test:multi_coll_drop] 2016-04-06T02:52:57.110-0500 c20012| 2016-04-06T02:52:08.616-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:57.125-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.127-0500 c20011| 2016-04-06T02:52:10.284-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.130-0500 c20011| 2016-04-06T02:52:10.284-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.141-0500 c20011| 2016-04-06T02:52:10.284-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.143-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.143-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.144-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.146-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.148-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.148-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.151-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.152-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.154-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.155-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.157-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.158-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl 
writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.160-0500 c20012| 2016-04-06T02:52:08.616-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.161-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.161-0500 c20012| 2016-04-06T02:52:08.617-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:57.164-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.165-0500 c20012| 2016-04-06T02:52:08.617-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:57.165-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.166-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.166-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.167-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.168-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.169-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.170-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.173-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.185-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.186-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.188-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.190-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.199-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.201-0500 c20012| 2016-04-06T02:52:08.617-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.202-0500 c20012| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 12] shutting down thread in 
pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.209-0500 c20012| 2016-04-06T02:52:08.618-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.225-0500 c20012| 2016-04-06T02:52:08.618-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:57.235-0500 c20012| 2016-04-06T02:52:08.618-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.243-0500 c20012| 2016-04-06T02:52:08.618-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 482 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|24, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.269-0500 c20012| 2016-04-06T02:52:08.618-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 482 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.274-0500 c20012| 2016-04-06T02:52:08.618-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 483 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.618-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|24, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.276-0500 c20012| 2016-04-06T02:52:08.618-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 483 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.281-0500 c20012| 2016-04-06T02:52:08.618-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 482 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.290-0500 c20012| 2016-04-06T02:52:08.626-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.295-0500 c20012| 2016-04-06T02:52:08.626-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 485 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.301-0500 c20012| 2016-04-06T02:52:08.626-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 485 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.302-0500 c20012| 2016-04-06T02:52:08.626-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 485 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.303-0500 c20012| 2016-04-06T02:52:08.626-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 483 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.303-0500 c20012| 2016-04-06T02:52:08.626-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|25, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.303-0500 c20012| 2016-04-06T02:52:08.626-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:57.305-0500 c20012| 2016-04-06T02:52:08.626-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|25, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.315-0500 c20012| 2016-04-06T02:52:08.626-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 488 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.626-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.322-0500 c20012| 2016-04-06T02:52:08.626-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.324-0500 c20012| 2016-04-06T02:52:08.626-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|25, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.329-0500 c20012| 2016-04-06T02:52:08.626-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 488 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.331-0500 c20012| 2016-04-06T02:52:08.626-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:57.333-0500 c20012| 2016-04-06T02:52:08.627-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|25, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.342-0500 c20012| 2016-04-06T02:52:08.629-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 488 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|26, t: 1, h: 4415888972038189494, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: -97.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-98.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-97.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.344-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.346-0500 c20013| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.347-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.347-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.351-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.352-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.352-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.353-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer 
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.354-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.354-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.355-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.355-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.356-0500 c20013| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.357-0500 c20013| 2016-04-06T02:52:08.903-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.358-0500 c20013| 2016-04-06T02:52:08.903-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.361-0500 c20013| 2016-04-06T02:52:08.903-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.362-0500 c20013| 2016-04-06T02:52:08.903-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.363-0500 c20013| 2016-04-06T02:52:08.903-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.363-0500 c20013| 2016-04-06T02:52:08.903-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.364-0500 c20013| 2016-04-06T02:52:08.903-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.364-0500 c20013| 2016-04-06T02:52:08.903-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.365-0500 c20013| 2016-04-06T02:52:08.903-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:57.367-0500 c20013| 2016-04-06T02:52:08.903-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.370-0500 c20013| 2016-04-06T02:52:08.903-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 718 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.372-0500 c20013| 2016-04-06T02:52:08.903-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 718 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.372-0500 c20013| 2016-04-06T02:52:08.903-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 719 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.903-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.373-0500 c20013| 2016-04-06T02:52:08.903-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 719 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.373-0500 c20013| 2016-04-06T02:52:08.904-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 718 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.375-0500 c20013| 2016-04-06T02:52:08.906-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 719 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.376-0500 c20013| 2016-04-06T02:52:08.906-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|55, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.376-0500 c20013| 2016-04-06T02:52:08.906-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:57.377-0500 c20012| 2016-04-06T02:52:08.629-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|26 and ending at ts: Timestamp 1459929128000|26 [js_test:multi_coll_drop] 2016-04-06T02:52:57.378-0500 c20012| 2016-04-06T02:52:08.629-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:57.378-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.379-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.379-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.381-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.383-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.384-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.385-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.387-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.388-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.389-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.391-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.394-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.395-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.396-0500 c20012| 2016-04-06T02:52:08.629-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:57.396-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.398-0500 c20012| 2016-04-06T02:52:08.629-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.399-0500 c20012| 2016-04-06T02:52:08.630-0500 D QUERY [repl writer worker 6] Using idhack: { _id: "multidrop.coll-_id_-98.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:57.403-0500 c20012| 2016-04-06T02:52:08.630-0500 D QUERY [repl writer worker 6] Using idhack: { _id: "multidrop.coll-_id_-97.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:57.406-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.407-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
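The "Waiting for 'committed' snapshot" / "Using 'committed' snapshot" pairs on c20012 above are the server honoring readConcern level "majority" with afterOpTime: the find blocks until the majority-committed snapshot reaches the requested opTime, then runs (here as an IDHACK lookup on config.collections). A shell equivalent of the logged command is sketched below. The connection target is an assumption; the command fields are copied from the log, noting that this log format appears to render BSON timestamps as seconds*1000|increment, so "Timestamp 1459929128000|25" corresponds to Timestamp(1459929128, 25) in the shell.

    // Sketch, assuming a direct connection to config server mongovm16:20012;
    // field values are copied from the "find: collections" command in the log.
    var cfg = new Mongo("mongovm16:20012").getDB("config");
    cfg.runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929128, 25), t: NumberLong(1) }
        },
        limit: 1,
        maxTimeMS: 30000
    });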
2016-04-06T02:52:57.408-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.413-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.413-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.413-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.414-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.415-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.417-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.417-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.420-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.420-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.426-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.426-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.428-0500 c20012| 2016-04-06T02:52:08.630-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.429-0500 c20012| 2016-04-06T02:52:08.631-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 490 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.631-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|25, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.431-0500 c20012| 2016-04-06T02:52:08.631-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 490 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.432-0500 c20012| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.433-0500 c20012| 2016-04-06T02:52:08.631-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.434-0500 c20012| 2016-04-06T02:52:08.631-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:57.439-0500 c20012| 2016-04-06T02:52:08.632-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.444-0500 c20012| 2016-04-06T02:52:08.632-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 491 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|25, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.445-0500 c20012| 2016-04-06T02:52:08.632-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 491 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.446-0500 c20012| 2016-04-06T02:52:08.632-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 491 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.452-0500 c20012| 2016-04-06T02:52:08.634-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.456-0500 c20012| 2016-04-06T02:52:08.634-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 493 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.457-0500 c20012| 2016-04-06T02:52:08.634-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 493 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.458-0500 c20012| 2016-04-06T02:52:08.634-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 493 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.459-0500 c20012| 2016-04-06T02:52:08.635-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 490 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.460-0500 c20012| 2016-04-06T02:52:08.635-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|26, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.462-0500 c20012| 2016-04-06T02:52:08.635-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:57.464-0500 c20012| 2016-04-06T02:52:08.635-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 496 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.635-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.465-0500 c20012| 2016-04-06T02:52:08.635-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 496 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.468-0500 c20012| 2016-04-06T02:52:08.636-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 496 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|27, t: 1, h: -3202951646012415608, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.635-0500-5704c02865c17830b843f183", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128635), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -98.0 }, max: { _id: MaxKey } }, left: { min: { _id: -98.0 }, max: { _id: -97.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -97.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.469-0500 c20012| 2016-04-06T02:52:08.636-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|27 and ending at ts: Timestamp 1459929128000|27 [js_test:multi_coll_drop] 2016-04-06T02:52:57.472-0500 c20012| 2016-04-06T02:52:08.636-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:57.472-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.472-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.473-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.474-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.475-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.477-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.477-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.477-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.478-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.478-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.479-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.479-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.481-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.481-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.482-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.482-0500 c20012| 2016-04-06T02:52:08.636-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:57.483-0500 c20012| 2016-04-06T02:52:08.636-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.485-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.485-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.490-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
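
The getMore batch above carried a config.changelog "split" document, which the repl writer worker pool then applied as a one-op batch ("replication batch size is 1"). A minimal mongo-shell sketch, not part of the test itself, assuming a live connection to the config primary logged in this run (mongovm16:20011), of how those split entries could be inspected:

    // Hypothetical inspection snippet; host and namespaces are taken from the log.
    var configDB = new Mongo("mongovm16:20011").getDB("config");
    configDB.changelog.find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .forEach(function (entry) {
            // details.before holds the pre-split chunk bounds; details.left and
            // details.right hold the two resulting chunks with bumped lastmod.
            printjson(entry.details);
        });
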
2016-04-06T02:52:57.491-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.492-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.493-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.494-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.496-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.508-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.509-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.510-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.512-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.514-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.514-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.515-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.516-0500 c20012| 2016-04-06T02:52:08.637-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:57.517-0500 c20012| 2016-04-06T02:52:08.637-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:57.518-0500 c20012| 2016-04-06T02:52:08.638-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.521-0500 c20012| 2016-04-06T02:52:08.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 498 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|26, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.522-0500 c20012| 2016-04-06T02:52:08.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 498 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:57.526-0500 c20012| 2016-04-06T02:52:08.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 498 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.527-0500 c20012| 2016-04-06T02:52:08.638-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 500 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.638-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|26, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.528-0500 c20011| 2016-04-06T02:52:10.284-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.532-0500 c20011| 2016-04-06T02:52:10.284-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.532-0500 c20011| 2016-04-06T02:52:10.284-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:57.538-0500 c20011| 2016-04-06T02:52:10.284-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 
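
Each replSetUpdatePosition above reports, per member, both an appliedOpTime and a durableOpTime, and the primary advances _lastCommittedOpTime once a majority is durable. A hedged sketch of reading the same progress via replSetGetStatus (host from this run; the shape of the optime field varies across server versions):

    // Illustrative only: summarize member replication progress from the shell.
    var admin = new Mongo("mongovm16:20011").getDB("admin");
    var status = admin.runCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        // m.optime is the member's last applied op, as propagated upstream by
        // heartbeats and replSetUpdatePosition.
        print(m.name + " " + m.stateStr + " optime: " + tojson(m.optime));
    });
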
2016-04-06T02:52:57.540-0500 c20011| 2016-04-06T02:52:10.284-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.544-0500 c20011| 2016-04-06T02:52:10.284-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.545-0500 c20011| 2016-04-06T02:52:10.285-0500 D REPL [conn25] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.551-0500 c20011| 2016-04-06T02:52:10.285-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02a65c17830b843f1a2') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.555-0500 c20011| 2016-04-06T02:52:10.285-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|9, t: 1 } } cursorid:17466612721 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.558-0500 c20011| 2016-04-06T02:52:10.285-0500 D COMMAND [conn14] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.559-0500 c20011| 2016-04-06T02:52:10.288-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.560-0500 c20011| 2016-04-06T02:52:10.288-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } } cursorid:20785203637 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.563-0500 c20011| 2016-04-06T02:52:10.288-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.567-0500 c20011| 2016-04-06T02:52:10.288-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:57.574-0500 c20011| 2016-04-06T02:52:10.288-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|9, t: 1 } and is durable through: { ts: Timestamp 1459929130000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.577-0500 c20011| 2016-04-06T02:52:10.288-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.581-0500 c20011| 2016-04-06T02:52:10.288-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.586-0500 c20011| 2016-04-06T02:52:10.290-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.588-0500 c20011| 2016-04-06T02:52:10.290-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:57.600-0500 c20011| 2016-04-06T02:52:10.290-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.602-0500 c20011| 2016-04-06T02:52:10.290-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.604-0500 c20011| 2016-04-06T02:52:10.290-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.605-0500 c20011| 2016-04-06T02:52:10.291-0500 D COMMAND [conn13] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.612-0500 c20011| 2016-04-06T02:52:10.291-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.613-0500 c20011| 2016-04-06T02:52:10.291-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:57.616-0500 c20011| 2016-04-06T02:52:10.291-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.619-0500 c20011| 2016-04-06T02:52:10.291-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.623-0500 c20011| 2016-04-06T02:52:10.291-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.625-0500 c20011| 2016-04-06T02:52:10.292-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.633-0500 c20011| 2016-04-06T02:52:10.292-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.635-0500 c20011| 2016-04-06T02:52:10.292-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.636-0500 c20011| 2016-04-06T02:52:10.292-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:57.637-0500 c20011| 2016-04-06T02:52:10.292-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.639-0500 c20011| 2016-04-06T02:52:10.292-0500 D COMMAND [conn12] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.640-0500 c20011| 2016-04-06T02:52:11.052-0500 D COMMAND [conn27] run command admin.$cmd { replSetStepDown: 10.0, force: true } [js_test:multi_coll_drop] 2016-04-06T02:52:57.641-0500 c20011| 2016-04-06T02:52:14.044-0500 D COMMAND [conn12] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:57.641-0500 c20011| 2016-04-06T02:52:14.044-0500 D COMMAND [conn27] command: replSetStepDown [js_test:multi_coll_drop] 2016-04-06T02:52:57.644-0500 c20011| 2016-04-06T02:52:14.044-0500 D REPL [conn12] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.646-0500 c20011| 2016-04-06T02:52:12.166-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.647-0500 c20011| 2016-04-06T02:52:14.044-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:57.648-0500 c20011| 2016-04-06T02:52:12.184-0500 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.650-0500 c20011| 2016-04-06T02:52:14.044-0500 D COMMAND [conn2] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:57.661-0500 2016-04-06T02:52:41.724-0500 I c20011| 2016-04-06T02:52:12.784-0500 D COMMAND [conn15] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { 
ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:57.661-0500 c20011| 2016-04-06T02:52:14.044-0500 D COMMAND [conn15] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:57.668-0500 NETWORK [thread2] trying reconnect to mongovm16:20012 (192.168.100.28) failed [js_test:multi_coll_drop] 2016-04-06T02:52:57.676-0500 c20011| 2016-04-06T02:52:14.044-0500 D COMMAND [conn25] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02e65c17830b843f1a4'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929134044), why: "splitting chunk [{ _id: -81.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.704-0500 c20011| 2016-04-06T02:52:11.140-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 45 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:21.140-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.717-0500 c20011| 2016-04-06T02:52:14.044-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 45 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:57.727-0500 c20011| 2016-04-06T02:52:14.044-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 46 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:24.044-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.728-0500 c20011| 2016-04-06T02:52:14.044-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 46 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:57.732-0500 c20011| 2016-04-06T02:52:14.045-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 45 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.736-0500 c20011| 2016-04-06T02:52:14.045-0500 D REPL [conn12] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929129000|12, t: 1 } and is durable through: { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.737-0500 c20011| 2016-04-06T02:52:14.045-0500 I COMMAND [conn12] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} 
protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.740-0500 c20011| 2016-04-06T02:52:14.045-0500 I COMMAND [conn27] Attempting to step down in response to replSetStepDown command [js_test:multi_coll_drop] 2016-04-06T02:52:57.755-0500 c20011| 2016-04-06T02:52:14.045-0500 D QUERY [conn27] received interrupt request for unknown op: 1462 known ops: [js_test:multi_coll_drop] 2016-04-06T02:52:57.759-0500 c20011| 2016-04-06T02:52:14.045-0500 D QUERY [conn27] received interrupt request for unknown op: 1449 known ops: [js_test:multi_coll_drop] 2016-04-06T02:52:57.759-0500 c20011| 2016-04-06T02:52:14.045-0500 D QUERY [conn27] received interrupt request for unknown op: 1458 known ops: [js_test:multi_coll_drop] 2016-04-06T02:52:57.765-0500 c20011| 2016-04-06T02:52:14.045-0500 D QUERY [conn27] received interrupt request for unknown op: 1460 known ops: [js_test:multi_coll_drop] 2016-04-06T02:52:57.799-0500 c20011| 2016-04-06T02:52:14.045-0500 D QUERY [conn27] received interrupt request for unknown op: 1459 known ops: [js_test:multi_coll_drop] 2016-04-06T02:52:57.800-0500 c20011| 2016-04-06T02:52:14.045-0500 D QUERY [conn27] received interrupt request for unknown op: 1445 known ops: [js_test:multi_coll_drop] 2016-04-06T02:52:57.802-0500 c20011| 2016-04-06T02:52:14.046-0500 I COMMAND [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after connections: 0, after extra_info: 0, after globalLock: 0, after locks: 0, after network: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 1950, after storageEngine: 1950, after tcmalloc: 1950, after wiredTiger: 1950, at end: 1950 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.803-0500 c20011| 2016-04-06T02:52:14.046-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:16.046Z [js_test:multi_coll_drop] 2016-04-06T02:52:57.805-0500 c20011| 2016-04-06T02:52:14.046-0500 D REPL [conn15] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.807-0500 c20011| 2016-04-06T02:52:14.046-0500 D REPL [conn15] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.837-0500 c20011| 2016-04-06T02:52:14.046-0500 I COMMAND [conn15] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.854-0500 c20011| 2016-04-06T02:52:14.046-0500 I COMMAND [conn14] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: 
{ acquireCount: { r: 3 } } } protocol:op_command 3761ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.854-0500 c20011| 2016-04-06T02:52:14.046-0500 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:52:57.870-0500 c20011| 2016-04-06T02:52:14.046-0500 I COMMAND [conn13] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 3755ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.875-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn12] SocketException: remote: 192.168.100.28:58940 error: 9001 socket exception [CLOSED] server [192.168.100.28:58940] [js_test:multi_coll_drop] 2016-04-06T02:52:57.878-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn1] SocketException: remote: 127.0.0.1:33447 error: 9001 socket exception [CLOSED] server [127.0.0.1:33447] [js_test:multi_coll_drop] 2016-04-06T02:52:57.879-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn15] SocketException: remote: 192.168.100.28:58943 error: 9001 socket exception [CLOSED] server [192.168.100.28:58943] [js_test:multi_coll_drop] 2016-04-06T02:52:57.879-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn11] SocketException: remote: 192.168.100.28:58722 error: 9001 socket exception [CLOSED] server [192.168.100.28:58722] [js_test:multi_coll_drop] 2016-04-06T02:52:57.883-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn14] SocketException: remote: 192.168.100.28:58942 error: 9001 socket exception [CLOSED] server [192.168.100.28:58942] [js_test:multi_coll_drop] 2016-04-06T02:52:57.885-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn11] end connection 192.168.100.28:58722 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.892-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn16] SocketException: remote: 192.168.100.28:58945 error: 9001 socket exception [CLOSED] server [192.168.100.28:58945] [js_test:multi_coll_drop] 2016-04-06T02:52:57.895-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn14] end connection 192.168.100.28:58942 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.896-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn16] end connection 192.168.100.28:58945 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.898-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn13] SocketException: remote: 192.168.100.28:58941 error: 9001 socket exception [CLOSED] server [192.168.100.28:58941] [js_test:multi_coll_drop] 2016-04-06T02:52:57.898-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn13] end connection 192.168.100.28:58941 (19 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.900-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn1] end connection 127.0.0.1:33447 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.901-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn21] SocketException: remote: 192.168.100.28:58990 error: 9001 socket exception [CLOSED] server [192.168.100.28:58990] [js_test:multi_coll_drop] 2016-04-06T02:52:57.903-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn21] end connection 192.168.100.28:58990 (17 connections now open) [js_test:multi_coll_drop] 
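
The run of SocketException/"end connection" records above is the expected fallout of the forced stepdown: on transition to SECONDARY the node drops its client connections, including the one that issued the command. A minimal sketch, assuming a direct shell connection to the stepping-down primary (mongovm16:20011 in this run), of issuing the same command and tolerating the dropped socket:

    // replSetStepDown closes the issuing connection, so the shell normally
    // surfaces a network error even when the stepdown itself succeeded.
    var admin = new Mongo("mongovm16:20011").getDB("admin");
    try {
        // force: true permits stepping down without a caught-up secondary.
        admin.runCommand({ replSetStepDown: 10, force: true });
    } catch (e) {
        print("connection closed by stepdown (expected): " + e);
    }
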
2016-04-06T02:52:57.907-0500 c20011| 2016-04-06T02:52:14.047-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:480 locks:{} protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.908-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn3] Socket say send() Bad file descriptor 192.168.100.28:58405 [js_test:multi_coll_drop] 2016-04-06T02:52:57.910-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn20] SocketException: remote: 192.168.100.28:58979 error: 9001 socket exception [CLOSED] server [192.168.100.28:58979] [js_test:multi_coll_drop] 2016-04-06T02:52:57.910-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn20] end connection 192.168.100.28:58979 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.912-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn3] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:58405] [js_test:multi_coll_drop] 2016-04-06T02:52:57.913-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn17] SocketException: remote: 192.168.100.28:58968 error: 9001 socket exception [CLOSED] server [192.168.100.28:58968] [js_test:multi_coll_drop] 2016-04-06T02:52:57.915-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn17] end connection 192.168.100.28:58968 (14 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.918-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn10] SocketException: remote: 192.168.100.28:58721 error: 9001 socket exception [CLOSED] server [192.168.100.28:58721] [js_test:multi_coll_drop] 2016-04-06T02:52:57.919-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn18] SocketException: remote: 192.168.100.28:58976 error: 9001 socket exception [CLOSED] server [192.168.100.28:58976] [js_test:multi_coll_drop] 2016-04-06T02:52:57.919-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn15] end connection 192.168.100.28:58943 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.920-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn18] end connection 192.168.100.28:58976 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.921-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn12] end connection 192.168.100.28:58940 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.923-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn22] SocketException: remote: 192.168.100.28:58991 error: 9001 socket exception [CLOSED] server [192.168.100.28:58991] [js_test:multi_coll_drop] 2016-04-06T02:52:57.924-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn22] end connection 192.168.100.28:58991 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.925-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [conn10] end connection 192.168.100.28:58721 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.927-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59434 #28 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.928-0500 c20011| 2016-04-06T02:52:14.047-0500 D NETWORK [conn19] SocketException: remote: 192.168.100.28:58977 error: 9001 socket exception [CLOSED] server [192.168.100.28:58977] [js_test:multi_coll_drop] 2016-04-06T02:52:57.928-0500 c20011| 2016-04-06T02:52:14.047-0500 I NETWORK 
[conn19] end connection 192.168.100.28:58977 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.929-0500 c20011| 2016-04-06T02:52:14.048-0500 D NETWORK [conn24] SocketException: remote: 192.168.100.28:59101 error: 9001 socket exception [CLOSED] server [192.168.100.28:59101] [js_test:multi_coll_drop] 2016-04-06T02:52:57.930-0500 c20011| 2016-04-06T02:52:14.048-0500 I NETWORK [conn24] end connection 192.168.100.28:59101 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.935-0500 c20011| 2016-04-06T02:52:14.048-0500 D NETWORK [conn23] SocketException: remote: 192.168.100.28:59096 error: 9001 socket exception [CLOSED] server [192.168.100.28:59096] [js_test:multi_coll_drop] 2016-04-06T02:52:57.937-0500 c20011| 2016-04-06T02:52:14.048-0500 I NETWORK [conn23] end connection 192.168.100.28:59096 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.937-0500 c20011| 2016-04-06T02:52:14.048-0500 I COMMAND [conn27] command admin.$cmd command: replSetStepDown { replSetStepDown: 10.0, force: true } numYields:0 reslen:82 locks:{ Global: { acquireCount: { r: 1, R: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.938-0500 c20011| 2016-04-06T02:52:14.048-0500 D NETWORK [conn27] Socket say send() Bad file descriptor 192.168.100.28:59154 [js_test:multi_coll_drop] 2016-04-06T02:52:57.939-0500 c20011| 2016-04-06T02:52:14.048-0500 I NETWORK [conn27] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:59154] [js_test:multi_coll_drop] 2016-04-06T02:52:57.940-0500 c20011| 2016-04-06T02:52:14.048-0500 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:480 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.941-0500 c20011| 2016-04-06T02:52:14.048-0500 D NETWORK [conn2] Socket say send() Bad file descriptor 192.168.100.28:58404 [js_test:multi_coll_drop] 2016-04-06T02:52:57.944-0500 c20011| 2016-04-06T02:52:14.048-0500 I NETWORK [conn2] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:58404] [js_test:multi_coll_drop] 2016-04-06T02:52:57.946-0500 c20011| 2016-04-06T02:52:14.048-0500 D NETWORK [conn9] SocketException: remote: 192.168.100.28:58719 error: 9001 socket exception [CLOSED] server [192.168.100.28:58719] [js_test:multi_coll_drop] 2016-04-06T02:52:57.946-0500 c20011| 2016-04-06T02:52:14.048-0500 I NETWORK [conn9] end connection 192.168.100.28:58719 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.948-0500 c20011| 2016-04-06T02:52:14.050-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59436 #29 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.949-0500 c20011| 2016-04-06T02:52:14.050-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 46 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.949-0500 c20011| 2016-04-06T02:52:14.050-0500 D COMMAND [conn29] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:52:57.970-0500 c20011| 2016-04-06T02:52:14.050-0500 I NETWORK 
[initandlisten] connection accepted from 192.168.100.28:59437 #30 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:57.972-0500 c20011| 2016-04-06T02:52:14.051-0500 D COMMAND [conn30] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:52:57.975-0500 c20011| 2016-04-06T02:52:14.051-0500 D COMMAND [conn28] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:52:57.978-0500 c20011| 2016-04-06T02:52:14.051-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:16.551Z [js_test:multi_coll_drop] 2016-04-06T02:52:57.980-0500 c20011| 2016-04-06T02:52:14.051-0500 I COMMAND [conn25] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c02e65c17830b843f1a4'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929134044), why: "splitting chunk [{ _id: -81.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:405 locks:{ Global: { acquireCount: { r: 1, w: 1 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 4932 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.982-0500 c20011| 2016-04-06T02:52:14.051-0500 D NETWORK [conn25] Socket say send() Bad file descriptor 192.168.100.28:59103 [js_test:multi_coll_drop] 2016-04-06T02:52:57.983-0500 c20011| 2016-04-06T02:52:14.051-0500 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.986-0500 c20011| 2016-04-06T02:52:14.051-0500 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.987-0500 c20011| 2016-04-06T02:52:14.051-0500 I NETWORK [conn25] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:59103] [js_test:multi_coll_drop] 2016-04-06T02:52:57.987-0500 c20011| 2016-04-06T02:52:14.051-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.989-0500 c20011| 2016-04-06T02:52:14.051-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:57.992-0500 c20011| 2016-04-06T02:52:14.051-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.993-0500 c20011| 2016-04-06T02:52:14.051-0500 D COMMAND [conn30] run command local.$cmd { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:57.994-0500 c20011| 2016-04-06T02:52:14.052-0500 I COMMAND [conn28] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 
reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.996-0500 c20011| 2016-04-06T02:52:14.052-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:57.996-0500 c20011| 2016-04-06T02:52:14.052-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:57.998-0500 c20011| 2016-04-06T02:52:14.052-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:57.999-0500 c20011| 2016-04-06T02:52:14.053-0500 D NETWORK [conn26] SocketException: remote: 192.168.100.28:59104 error: 9001 socket exception [CLOSED] server [192.168.100.28:59104] [js_test:multi_coll_drop] 2016-04-06T02:52:57.999-0500 c20011| 2016-04-06T02:52:14.053-0500 I NETWORK [conn26] end connection 192.168.100.28:59104 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:58.007-0500 c20011| 2016-04-06T02:52:14.053-0500 D NETWORK [conn8] SocketException: remote: 192.168.100.28:58715 error: 9001 socket exception [CLOSED] server [192.168.100.28:58715] [js_test:multi_coll_drop] 2016-04-06T02:52:58.008-0500 c20011| 2016-04-06T02:52:14.053-0500 I NETWORK [conn8] end connection 192.168.100.28:58715 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:58.008-0500 c20011| 2016-04-06T02:52:14.054-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59438 #31 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:58.019-0500 c20011| 2016-04-06T02:52:14.054-0500 D COMMAND [conn31] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:52:58.024-0500 c20011| 2016-04-06T02:52:14.054-0500 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.029-0500 c20011| 2016-04-06T02:52:14.054-0500 D COMMAND [conn31] run command local.$cmd { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.033-0500 c20011| 2016-04-06T02:52:14.552-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59473 #32 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:58.033-0500 c20011| 2016-04-06T02:52:14.552-0500 D COMMAND [conn32] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:52:58.034-0500 c20011| 2016-04-06T02:52:14.552-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.036-0500 c20011| 2016-04-06T02:52:14.552-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.037-0500 c20011| 2016-04-06T02:52:14.553-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.038-0500 c20011| 2016-04-06T02:52:14.553-0500 D COMMAND [conn32] run command admin.$cmd { 
ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.039-0500 c20011| 2016-04-06T02:52:14.553-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.040-0500 c20011| 2016-04-06T02:52:14.613-0500 D REPL [rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp 1459929130000|10, t: 1 } 3135197531614568333 [js_test:multi_coll_drop] 2016-04-06T02:52:58.040-0500 c20011| 2016-04-06T02:52:15.054-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.041-0500 c20011| 2016-04-06T02:52:15.054-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.042-0500 c20011| 2016-04-06T02:52:15.555-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.065-0500 c20011| 2016-04-06T02:52:15.555-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.065-0500 c20011| 2016-04-06T02:52:15.850-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59567 #33 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:58.069-0500 c20011| 2016-04-06T02:52:15.850-0500 D COMMAND [conn33] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.073-0500 c20011| 2016-04-06T02:52:15.850-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.073-0500 c20011| 2016-04-06T02:52:15.851-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.074-0500 c20011| 2016-04-06T02:52:15.851-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.079-0500 c20011| 2016-04-06T02:52:16.046-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 49 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:26.046-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.080-0500 c20011| 2016-04-06T02:52:16.047-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 49 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:52:58.085-0500 c20011| 2016-04-06T02:52:16.047-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 49 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.086-0500 c20011| 2016-04-06T02:52:16.047-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:18.547Z [js_test:multi_coll_drop] 2016-04-06T02:52:58.088-0500 c20011| 2016-04-06T02:52:16.052-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.090-0500 c20011| 2016-04-06T02:52:16.052-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 
locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.091-0500 c20011| 2016-04-06T02:52:16.052-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.091-0500 c20011| 2016-04-06T02:52:16.052-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:58.096-0500 c20011| 2016-04-06T02:52:16.053-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.097-0500 c20011| 2016-04-06T02:52:16.053-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:58.102-0500 c20011| 2016-04-06T02:52:16.054-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.102-0500 c20011| 2016-04-06T02:52:16.055-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.102-0500 c20011| 2016-04-06T02:52:16.056-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.103-0500 c20011| 2016-04-06T02:52:16.057-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.103-0500 c20011| 2016-04-06T02:52:16.253-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.106-0500 c20011| 2016-04-06T02:52:16.254-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.106-0500 c20011| 2016-04-06T02:52:16.454-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.108-0500 c20011| 2016-04-06T02:52:16.455-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.110-0500 c20011| 2016-04-06T02:52:16.545-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59591 #34 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:58.112-0500 c20011| 2016-04-06T02:52:16.545-0500 D COMMAND [conn34] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:52:58.114-0500 c20011| 2016-04-06T02:52:16.545-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.130-0500 c20011| 2016-04-06T02:52:16.546-0500 D COMMAND [conn34] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: 
Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.131-0500 c20011| 2016-04-06T02:52:16.546-0500 D COMMAND [conn34] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:58.138-0500 c20011| 2016-04-06T02:52:16.546-0500 D REPL [conn34] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.143-0500 c20011| 2016-04-06T02:52:16.546-0500 D REPL [conn34] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.151-0500 c20011| 2016-04-06T02:52:16.546-0500 I COMMAND [conn34] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.153-0500 c20011| 2016-04-06T02:52:16.546-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59592 #35 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:52:58.157-0500 c20011| 2016-04-06T02:52:16.546-0500 D COMMAND [conn35] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:52:58.160-0500 c20011| 2016-04-06T02:52:16.546-0500 I COMMAND [conn35] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.165-0500 c20011| 2016-04-06T02:52:16.546-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.165-0500 c20011| 2016-04-06T02:52:16.546-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:52:58.175-0500 c20012| 2016-04-06T02:52:08.638-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 500 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.183-0500 c20012| 2016-04-06T02:52:08.639-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.190-0500 c20012| 2016-04-06T02:52:08.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 501 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.194-0500 c20012| 2016-04-06T02:52:08.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 501 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.207-0500 c20012| 2016-04-06T02:52:08.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 501 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.208-0500 c20012| 2016-04-06T02:52:08.645-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 500 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.211-0500 c20012| 2016-04-06T02:52:08.645-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|27, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.212-0500 c20012| 2016-04-06T02:52:08.645-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:58.219-0500 c20012| 2016-04-06T02:52:08.645-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 504 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.645-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.221-0500 c20012| 2016-04-06T02:52:08.645-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 504 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.225-0500 c20012| 2016-04-06T02:52:08.646-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 504 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|28, t: 1, h: -3132328473915241474, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.231-0500 c20012| 2016-04-06T02:52:08.646-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|28 and ending at ts: Timestamp 1459929128000|28 [js_test:multi_coll_drop] 2016-04-06T02:52:58.234-0500 c20012| 2016-04-06T02:52:08.647-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:58.235-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.236-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.238-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.241-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.242-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.242-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.245-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.251-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.254-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.256-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.260-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.261-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.276-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.277-0500 c20012| 2016-04-06T02:52:08.647-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:58.278-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.279-0500 c20012| 2016-04-06T02:52:08.647-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:58.280-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.281-0500 c20012| 2016-04-06T02:52:08.647-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.282-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.282-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
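
The one-op batch applied above is the oplog entry fetched at request 504: { op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, i.e. the distributed lock protecting multidrop.coll being released on the config replica set (a later entry in this same log re-acquires it with state: 2 and why: "splitting chunk ..."). As a hedged sketch, not part of the test itself, the lock document could be inspected from a mongo shell connected to a config server roughly like this:

    // Sketch only: read the legacy distributed-lock document that the
    // replicated update above modifies. In this log, state: 0 appears when
    // the lock is free and state: 2 while it is held for a split.
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    printjson(lock);  // e.g. { _id: "multidrop.coll", state: 0, ... } after release
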
2016-04-06T02:52:58.283-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.288-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.289-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.290-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.293-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.293-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.294-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.295-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.297-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.299-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.300-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.300-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.311-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.317-0500 c20012| 2016-04-06T02:52:08.648-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.320-0500 c20012| 2016-04-06T02:52:08.648-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:58.335-0500 c20012| 2016-04-06T02:52:08.648-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.385-0500 c20012| 2016-04-06T02:52:08.648-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 506 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|27, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.397-0500 c20012| 2016-04-06T02:52:08.648-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 506 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.398-0500 c20012| 2016-04-06T02:52:08.648-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 507 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.648-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|27, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.401-0500 c20012| 2016-04-06T02:52:08.649-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 506 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.404-0500 c20012| 2016-04-06T02:52:08.649-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 507 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.411-0500 c20012| 2016-04-06T02:52:08.655-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.430-0500 c20012| 2016-04-06T02:52:08.655-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 509 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|28, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.431-0500 c20012| 2016-04-06T02:52:08.655-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 509 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.431-0500 c20012| 2016-04-06T02:52:08.655-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 509 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.432-0500 c20012| 2016-04-06T02:52:08.655-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 507 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.434-0500 c20012| 2016-04-06T02:52:08.656-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.434-0500 c20012| 2016-04-06T02:52:08.656-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:58.437-0500 c20012| 2016-04-06T02:52:08.656-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 512 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.656-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.438-0500 c20012| 2016-04-06T02:52:08.656-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 512 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.442-0500 c20012| 2016-04-06T02:52:08.656-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.446-0500 c20012| 2016-04-06T02:52:08.656-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.450-0500 c20012| 2016-04-06T02:52:08.656-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.451-0500 c20012| 2016-04-06T02:52:08.656-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:58.458-0500 c20012| 2016-04-06T02:52:08.657-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.459-0500 c20012| 2016-04-06T02:52:08.658-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.463-0500 c20012| 2016-04-06T02:52:08.658-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.466-0500 c20012| 2016-04-06T02:52:08.658-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.468-0500 c20012| 2016-04-06T02:52:08.658-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:58.474-0500 c20012| 2016-04-06T02:52:08.658-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|28, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.480-0500 c20012| 2016-04-06T02:52:08.659-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 512 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|29, t: 1, h: -150120968679180590, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f184'), state: 2, when: new Date(1459929128658), why: "splitting chunk [{ _id: -97.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.485-0500 c20012| 2016-04-06T02:52:08.659-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|29 and ending at ts: Timestamp 1459929128000|29 [js_test:multi_coll_drop] 2016-04-06T02:52:58.490-0500 c20012| 2016-04-06T02:52:08.660-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:58.491-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.495-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.496-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.517-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.519-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.520-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.522-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.525-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.543-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.544-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.547-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.548-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.570-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.584-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.585-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.594-0500 c20012| 2016-04-06T02:52:08.660-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:58.607-0500 c20012| 2016-04-06T02:52:08.660-0500 D QUERY [repl writer worker 12] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:58.608-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.613-0500 c20012| 2016-04-06T02:52:08.660-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.614-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
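
The find commands a few records above show how config metadata reads behave in this test: conn7 queries config.chunks with readConcern { level: "majority", afterOpTime: ... }, and the server blocks ("Waiting for 'committed' snapshot") until _lastCommittedOpTime reaches the requested opTime before serving the query from the committed snapshot. The accompanying plan-ranking line also spells out the score arithmetic: with 1 document advanced in 1 work cycle, productivity is 1, so score = baseScore(1) + productivity(1) + tieBreakers(3 x 0.0001) = 2.0003, as logged. A hedged shell sketch of such a read, with the opTime values copied from the log:

    // Sketch only: a majority read pinned to an opTime, mirroring the
    // config.chunks query in the log. The server waits until that opTime
    // is majority-committed before returning.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929128, 28), t: NumberLong(1) }
        },
        maxTimeMS: 30000
    });
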
2016-04-06T02:52:58.622-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.626-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.628-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.628-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.631-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.634-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.643-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.643-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.645-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.650-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.653-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.654-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.657-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.666-0500 c20012| 2016-04-06T02:52:08.661-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.671-0500 c20012| 2016-04-06T02:52:08.662-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 514 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.662-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|28, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.674-0500 c20012| 2016-04-06T02:52:08.662-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 514 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.675-0500 c20012| 2016-04-06T02:52:08.662-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:58.686-0500 c20012| 2016-04-06T02:52:08.662-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.704-0500 c20012| 2016-04-06T02:52:08.662-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 515 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|28, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.710-0500 c20012| 2016-04-06T02:52:08.662-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 515 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.723-0500 c20012| 2016-04-06T02:52:08.663-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 515 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.727-0500 c20012| 2016-04-06T02:52:08.664-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|29, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.729-0500 c20012| 2016-04-06T02:52:08.664-0500 D REPL [conn11] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929128000|29, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929128000|28, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.729-0500 c20012| 2016-04-06T02:52:08.664-0500 D REPL [conn11] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999982μs [js_test:multi_coll_drop] 2016-04-06T02:52:58.730-0500 c20012| 2016-04-06T02:52:08.664-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 514 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.731-0500 c20012| 2016-04-06T02:52:08.665-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|29, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.733-0500 c20012| 2016-04-06T02:52:08.665-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.737-0500 c20012| 2016-04-06T02:52:08.665-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|29, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.739-0500 c20012| 2016-04-06T02:52:08.665-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:58.740-0500 c20012| 2016-04-06T02:52:08.665-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:52:58.745-0500 c20012| 2016-04-06T02:52:08.665-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 518 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.665-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.748-0500 c20012| 2016-04-06T02:52:08.665-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 518 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.763-0500 c20012| 2016-04-06T02:52:08.665-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|29, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.767-0500 c20012| 2016-04-06T02:52:08.665-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|8 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|29, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.769-0500 c20012| 2016-04-06T02:52:08.665-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.775-0500 c20012| 2016-04-06T02:52:08.665-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|8 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|29, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.779-0500 c20012| 2016-04-06T02:52:08.665-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:58.783-0500 c20012| 2016-04-06T02:52:08.665-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|8 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|29, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.793-0500 c20012| 2016-04-06T02:52:08.667-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.798-0500 c20012| 2016-04-06T02:52:08.667-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 519 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.800-0500 c20012| 2016-04-06T02:52:08.667-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 519 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.802-0500 c20012| 2016-04-06T02:52:08.667-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 519 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.812-0500 c20012| 2016-04-06T02:52:08.667-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 518 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|30, t: 1, h: -3082120306973010549, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: -96.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-97.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: 
"multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-96.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.813-0500 c20012| 2016-04-06T02:52:08.668-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|30 and ending at ts: Timestamp 1459929128000|30 [js_test:multi_coll_drop] 2016-04-06T02:52:58.814-0500 c20012| 2016-04-06T02:52:08.668-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:52:58.817-0500 c20012| 2016-04-06T02:52:08.669-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:58.817-0500 c20012| 2016-04-06T02:52:08.669-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.818-0500 c20012| 2016-04-06T02:52:08.669-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.819-0500 c20012| 2016-04-06T02:52:08.669-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.819-0500 c20012| 2016-04-06T02:52:08.669-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.821-0500 c20012| 2016-04-06T02:52:08.669-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.822-0500 2016-04-06T02:52:42.957-0500 I NETWORK [thread2] reconnect mongovm16:20012 (192.168.100.28) ok [js_test:multi_coll_drop] 2016-04-06T02:52:58.823-0500 c20012| 2016-04-06T02:52:08.669-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.824-0500 c20012| 2016-04-06T02:52:08.669-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.825-0500 c20012| 2016-04-06T02:52:08.669-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.827-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.827-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.830-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.831-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.835-0500 c20012| 2016-04-06T02:52:08.670-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:58.836-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool 
[js_test:multi_coll_drop] 2016-04-06T02:52:58.859-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.860-0500 c20012| 2016-04-06T02:52:08.670-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 522 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.670-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|29, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.862-0500 c20012| 2016-04-06T02:52:08.670-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-97.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:58.867-0500 c20012| 2016-04-06T02:52:08.670-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 522 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.868-0500 c20012| 2016-04-06T02:52:08.670-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-96.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:58.871-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.873-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.877-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.878-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.878-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.881-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.884-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.887-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.888-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.890-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.891-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.896-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.896-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.898-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 7] shutting down thread 
in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.900-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.903-0500 c20012| 2016-04-06T02:52:08.670-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.904-0500 c20012| 2016-04-06T02:52:08.671-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.907-0500 c20012| 2016-04-06T02:52:08.671-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:58.913-0500 c20012| 2016-04-06T02:52:08.672-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:58.923-0500 c20012| 2016-04-06T02:52:08.672-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.931-0500 c20012| 2016-04-06T02:52:08.672-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 523 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|29, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.933-0500 c20012| 2016-04-06T02:52:08.672-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 523 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:58.937-0500 c20012| 2016-04-06T02:52:08.672-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 523 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.944-0500 c20012| 2016-04-06T02:52:08.673-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.947-0500 c20011| 2016-04-06T02:52:16.546-0500 D REPL [conn35] received notification 
that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.951-0500 c20011| 2016-04-06T02:52:16.546-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.953-0500 c20011| 2016-04-06T02:52:16.547-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.955-0500 c20011| 2016-04-06T02:52:16.551-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 51 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:26.551-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.958-0500 c20011| 2016-04-06T02:52:16.551-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 51 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:58.963-0500 c20011| 2016-04-06T02:52:16.551-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 51 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:58.965-0500 c20011| 2016-04-06T02:52:16.551-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:19.051Z [js_test:multi_coll_drop] 2016-04-06T02:52:58.968-0500 c20011| 2016-04-06T02:52:16.552-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } cursorid:17466612721 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2500ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.969-0500 c20011| 2016-04-06T02:52:16.552-0500 D COMMAND [conn30] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 17466612721 ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.973-0500 c20011| 2016-04-06T02:52:16.552-0500 I COMMAND [conn30] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 17466612721 ] } numYields:0 reslen:175 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.974-0500 c20011| 2016-04-06T02:52:16.552-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, 
term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.974-0500 c20011| 2016-04-06T02:52:16.552-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:52:58.979-0500 c20011| 2016-04-06T02:52:16.553-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.981-0500 c20011| 2016-04-06T02:52:16.554-0500 I COMMAND [conn31] command local.oplog.rs command: getMore { getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } cursorid:20785203637 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2500ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.984-0500 c20011| 2016-04-06T02:52:16.555-0500 D COMMAND [conn31] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 20785203637 ] } [js_test:multi_coll_drop] 2016-04-06T02:52:58.990-0500 c20011| 2016-04-06T02:52:16.555-0500 I COMMAND [conn31] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 20785203637 ] } numYields:0 reslen:175 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:58.995-0500 c20011| 2016-04-06T02:52:16.555-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:58.999-0500 c20012| 2016-04-06T02:52:08.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 525 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.013-0500 c20012| 2016-04-06T02:52:08.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 525 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.013-0500 c20012| 2016-04-06T02:52:08.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 525 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.015-0500 c20012| 2016-04-06T02:52:08.673-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 522 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.017-0500 c20012| 2016-04-06T02:52:08.673-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|30, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.019-0500 c20012| 2016-04-06T02:52:08.673-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:59.022-0500 c20012| 2016-04-06T02:52:08.674-0500 D ASIO 
[rsBackgroundSync-0] startCommand: RemoteCommand 528 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.673-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.029-0500 c20012| 2016-04-06T02:52:08.674-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 528 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.031-0500 c20012| 2016-04-06T02:52:08.674-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 528 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|31, t: 1, h: -5487869586575022175, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.673-0500-5704c02865c17830b843f185", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128673), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -97.0 }, max: { _id: MaxKey } }, left: { min: { _id: -97.0 }, max: { _id: -96.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -96.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.034-0500 c20012| 2016-04-06T02:52:08.674-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|31 and ending at ts: Timestamp 1459929128000|31 [js_test:multi_coll_drop] 2016-04-06T02:52:59.040-0500 c20012| 2016-04-06T02:52:08.674-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.040-0500 c20012| 2016-04-06T02:52:08.674-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.042-0500 c20012| 2016-04-06T02:52:08.674-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.045-0500 c20012| 2016-04-06T02:52:08.674-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.045-0500 c20012| 2016-04-06T02:52:08.674-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.048-0500 c20012| 2016-04-06T02:52:08.674-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.049-0500 c20012| 2016-04-06T02:52:08.674-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.050-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.054-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.056-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.067-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.068-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.069-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.071-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.072-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.076-0500 c20012| 2016-04-06T02:52:08.675-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:59.084-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.103-0500 c20012| 2016-04-06T02:52:08.675-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.104-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.106-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.107-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.111-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.112-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.112-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.115-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.115-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.118-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.130-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.131-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.132-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.134-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool 
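
The getMore commands above (cursor 20785203637, maxTimeMS: 2500) are secondary c20012 tailing the primary's oplog on mongovm16:20011: each fetched batch, such as the config.changelog "split" document for [{ _id: -97.0 }, { _id: MaxKey }), is handed to rsSync, which spins up the "repl writer worker" pool for that batch (here "replication batch size is 1") and tears it straight back down, which is why every apply cycle is bracketed by runs of starting/shutting-down thread messages. A minimal shell sketch (hypothetical; assumes a connection to any member of multidrop-configRS) that reads the same oplog tail the fetcher is consuming:

    // Peek at the newest oplog entries on this member -- the same records
    // rsBackgroundSync reports fetching in the log above.
    var oplog = db.getSiblingDB("local").oplog.rs;
    oplog.find().sort({ $natural: -1 }).limit(3).forEach(printjson);
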
[js_test:multi_coll_drop] 2016-04-06T02:52:59.136-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.137-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.141-0500 c20012| 2016-04-06T02:52:08.676-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 530 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.676-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|30, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.143-0500 c20012| 2016-04-06T02:52:08.676-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.145-0500 c20012| 2016-04-06T02:52:08.677-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 530 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.147-0500 c20012| 2016-04-06T02:52:08.678-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.152-0500 c20012| 2016-04-06T02:52:08.678-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.158-0500 c20012| 2016-04-06T02:52:08.678-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 531 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|30, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.159-0500 c20012| 2016-04-06T02:52:08.678-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 531 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.162-0500 c20012| 2016-04-06T02:52:08.678-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 531 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.180-0500 c20012| 2016-04-06T02:52:08.684-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.200-0500 c20012| 2016-04-06T02:52:08.684-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 533 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.203-0500 c20012| 2016-04-06T02:52:08.684-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 533 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.211-0500 c20012| 2016-04-06T02:52:08.684-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 533 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.216-0500 c20012| 2016-04-06T02:52:08.684-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 530 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.218-0500 c20012| 2016-04-06T02:52:08.685-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|31, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.218-0500 c20012| 2016-04-06T02:52:08.685-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:59.225-0500 c20012| 2016-04-06T02:52:08.685-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 536 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.685-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.226-0500 c20012| 2016-04-06T02:52:08.685-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 536 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.231-0500 c20012| 2016-04-06T02:52:08.686-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 536 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|32, t: 1, h: -2637408664367781023, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.237-0500 c20012| 2016-04-06T02:52:08.686-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|32 and ending at ts: Timestamp 1459929128000|32 [js_test:multi_coll_drop] 2016-04-06T02:52:59.240-0500 c20012| 2016-04-06T02:52:08.686-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.241-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.242-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.243-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.244-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.249-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.251-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.253-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.254-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.257-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.258-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.259-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.261-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.262-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.266-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.266-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.268-0500 c20012| 2016-04-06T02:52:08.686-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:59.272-0500 c20012| 2016-04-06T02:52:08.686-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.272-0500 c20012| 2016-04-06T02:52:08.687-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:59.272-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.273-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
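
The batch applied above carries the { op: "u", ns: "config.locks", o: { $set: { state: 0 } } } entry fetched via request 536: the split machinery releasing the distributed lock on "multidrop.coll" (state 0) before re-acquiring it a few entries later (state 2, why: "splitting chunk [{ _id: -96.0 }, { _id: MaxKey }) in multidrop.coll"), and the idhack lookup on { _id: "multidrop.coll" } is the secondary applying that release. A hypothetical way to watch the same lock document from the shell, assuming a connection to a config server:

    // Observe the distributed lock document whose state transitions
    // (0 = released, 2 = held) are replicating in this section.
    db.getSiblingDB("config").locks.find({ _id: "multidrop.coll" }).forEach(printjson);
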
2016-04-06T02:52:59.282-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.302-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.309-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.311-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.330-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.333-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.340-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.340-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.343-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.355-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.364-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.366-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.366-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.370-0500 c20012| 2016-04-06T02:52:08.687-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.373-0500 c20012| 2016-04-06T02:52:08.688-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.381-0500 c20012| 2016-04-06T02:52:08.688-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.392-0500 c20012| 2016-04-06T02:52:08.688-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 538 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|31, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.394-0500 c20012| 2016-04-06T02:52:08.688-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 538 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.397-0500 c20012| 2016-04-06T02:52:08.688-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 538 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.402-0500 c20012| 2016-04-06T02:52:08.689-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 540 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.689-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|31, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.404-0500 c20012| 2016-04-06T02:52:08.689-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 540 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.413-0500 c20012| 2016-04-06T02:52:08.690-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.420-0500 c20012| 2016-04-06T02:52:08.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 541 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|32, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.427-0500 c20012| 2016-04-06T02:52:08.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 541 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.429-0500 c20012| 2016-04-06T02:52:08.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 541 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.432-0500 c20012| 2016-04-06T02:52:08.690-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 540 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.432-0500 c20012| 2016-04-06T02:52:08.691-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|32, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.434-0500 c20012| 2016-04-06T02:52:08.691-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:59.439-0500 c20012| 2016-04-06T02:52:08.691-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 544 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.691-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.441-0500 c20012| 2016-04-06T02:52:08.691-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.442-0500 c20012| 2016-04-06T02:52:08.691-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 544 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.445-0500 c20012| 2016-04-06T02:52:08.691-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.451-0500 c20012| 2016-04-06T02:52:08.691-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.453-0500 c20012| 2016-04-06T02:52:08.691-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:59.456-0500 c20012| 2016-04-06T02:52:08.691-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|32, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:59.462-0500 c20012| 2016-04-06T02:52:08.694-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 544 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|33, t: 1, h: 1320383207073572803, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f186'), state: 2, when: new Date(1459929128693), why: "splitting chunk [{ _id: -96.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.473-0500 c20012| 2016-04-06T02:52:08.694-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|33 and ending at ts: Timestamp 1459929128000|33 [js_test:multi_coll_drop] 2016-04-06T02:52:59.476-0500 c20012| 2016-04-06T02:52:08.694-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.481-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.483-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.490-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.495-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.495-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.500-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.504-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.505-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.506-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.515-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.519-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.521-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.522-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.522-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.523-0500 c20012| 2016-04-06T02:52:08.695-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:59.528-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.528-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.531-0500 c20012| 2016-04-06T02:52:08.695-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:52:59.532-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.533-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
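
Note how conn7's find on config.chunks only ran after "Waiting for 'committed' snapshot to be available for reading": with readConcern { level: "majority", afterOpTime: ... }, the server blocks the read until its majority-committed snapshot has caught up to the requested optime, so the sharding code is guaranteed to see chunk metadata reflecting the write it just performed. A sketch of the same style of read from the shell (hypothetical; omits afterOpTime, since a meaningful value comes from a preceding write, and assumes the server supports majority committed reads):

    // Fetch the highest-versioned chunk for the collection with a
    // majority read, mirroring the conn7 query above.
    var res = db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });
    printjson(res);
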
2016-04-06T02:52:59.533-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.534-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.550-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.554-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.554-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.555-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.557-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.558-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.561-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.570-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.572-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.575-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.580-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.582-0500 c20012| 2016-04-06T02:52:08.695-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.583-0500 c20012| 2016-04-06T02:52:08.696-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.595-0500 c20012| 2016-04-06T02:52:08.696-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.603-0500 c20012| 2016-04-06T02:52:08.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 546 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|32, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.606-0500 c20012| 2016-04-06T02:52:08.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 546 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.607-0500 c20012| 2016-04-06T02:52:08.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 546 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.609-0500 c20012| 2016-04-06T02:52:08.697-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 548 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.697-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|32, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.611-0500 c20012| 2016-04-06T02:52:08.697-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 548 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.613-0500 c20012| 2016-04-06T02:52:08.699-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.617-0500 c20012| 2016-04-06T02:52:08.699-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 549 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|33, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.621-0500 c20012| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 549 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.622-0500 c20012| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 549 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.623-0500 c20012| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 548 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.625-0500 c20012| 2016-04-06T02:52:08.700-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|33, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.630-0500 c20012| 2016-04-06T02:52:08.700-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:59.640-0500 c20012| 2016-04-06T02:52:08.700-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 552 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.700-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.642-0500 c20012| 2016-04-06T02:52:08.700-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 552 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.649-0500 c20012| 2016-04-06T02:52:08.704-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|10 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|33, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.653-0500 c20012| 2016-04-06T02:52:08.704-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.658-0500 c20012| 2016-04-06T02:52:08.704-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|10 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|33, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.660-0500 c20012| 2016-04-06T02:52:08.705-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:52:59.666-0500 c20012| 2016-04-06T02:52:08.705-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|10 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|33, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:52:59.673-0500 c20012| 2016-04-06T02:52:08.706-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 552 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|34, t: 1, h: 1358286614020305507, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: -95.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-96.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-95.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.677-0500 c20012| 2016-04-06T02:52:08.708-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|34 and ending at ts: Timestamp 1459929128000|34 [js_test:multi_coll_drop] 2016-04-06T02:52:59.680-0500 c20012| 2016-04-06T02:52:08.708-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.681-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.682-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.682-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.684-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.685-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.686-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.688-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.689-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.690-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.692-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.695-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.697-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.698-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.699-0500 c20012| 2016-04-06T02:52:08.708-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:59.700-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.701-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.702-0500 c20012| 2016-04-06T02:52:08.708-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.706-0500 c20012| 2016-04-06T02:52:08.709-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-96.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:59.712-0500 c20012| 2016-04-06T02:52:08.709-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-95.0" } [js_test:multi_coll_drop] 2016-04-06T02:52:59.712-0500 c20012| 2016-04-06T02:52:08.709-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
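
The { op: "c", ns: "config.$cmd", o: { applyOps: [ ... ] } } entry fetched via request 552 is the split commit itself: both halves of [{ _id: -96.0 }, { _id: MaxKey }) are upserted into config.chunks in one applyOps batch, each with a bumped chunk version (lastmod 1|11 and 1|12 under the unchanged lastmodEpoch), written with { w: "majority", wtimeout: 15000 } so the majority reads above can observe it. To see the resulting metadata in version order from the shell (a sketch, under the same config-server-connection assumption):

    // List the chunks for the collection ordered by version -- the same
    // { ns: 1, lastmod: 1 } index the IXSCAN plans above are using.
    db.getSiblingDB("config").chunks
        .find({ ns: "multidrop.coll" })
        .sort({ lastmod: 1 })
        .forEach(printjson);
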
2016-04-06T02:52:59.715-0500 c20012| 2016-04-06T02:52:08.709-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.717-0500 c20012| 2016-04-06T02:52:08.709-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.720-0500 c20012| 2016-04-06T02:52:08.709-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.723-0500 c20012| 2016-04-06T02:52:08.709-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.723-0500 c20012| 2016-04-06T02:52:08.709-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.724-0500 c20012| 2016-04-06T02:52:08.709-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.730-0500 c20012| 2016-04-06T02:52:08.710-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 554 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.710-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|33, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.733-0500 c20012| 2016-04-06T02:52:08.710-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 554 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.734-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.737-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.739-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.741-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.741-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.741-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.742-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.745-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.748-0500 c20012| 2016-04-06T02:52:08.711-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.749-0500 c20012| 2016-04-06T02:52:08.711-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.758-0500 c20012| 2016-04-06T02:52:08.711-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.765-0500 c20012| 2016-04-06T02:52:08.711-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 555 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|33, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.765-0500 c20012| 2016-04-06T02:52:08.711-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 555 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.766-0500 c20012| 2016-04-06T02:52:08.712-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 555 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.770-0500 c20012| 2016-04-06T02:52:08.712-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.777-0500 c20012| 2016-04-06T02:52:08.712-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 557 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:52:59.778-0500 c20012| 2016-04-06T02:52:08.712-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 557 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.779-0500 c20012| 2016-04-06T02:52:08.712-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 557 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.782-0500 c20012| 2016-04-06T02:52:08.713-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 554 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.786-0500 c20012| 2016-04-06T02:52:08.713-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|34, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.787-0500 c20012| 2016-04-06T02:52:08.713-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:52:59.790-0500 c20012| 2016-04-06T02:52:08.713-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 560 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.713-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:52:59.792-0500 c20012| 2016-04-06T02:52:08.713-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 560 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:52:59.799-0500 c20012| 2016-04-06T02:52:08.713-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 560 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|35, t: 1, h: 2198379315137148602, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.713-0500-5704c02865c17830b843f187", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128713), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -96.0 }, max: { _id: MaxKey } }, left: { min: { _id: -96.0 }, max: { _id: -95.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -95.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:52:59.802-0500 c20012| 2016-04-06T02:52:08.713-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|35 and ending at ts: Timestamp 1459929128000|35 [js_test:multi_coll_drop] 2016-04-06T02:52:59.803-0500 c20012| 2016-04-06T02:52:08.713-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:52:59.804-0500 c20012| 2016-04-06T02:52:08.713-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.809-0500 c20012| 2016-04-06T02:52:08.713-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.810-0500 c20012| 2016-04-06T02:52:08.713-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.811-0500 c20012| 2016-04-06T02:52:08.713-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.811-0500 c20012| 2016-04-06T02:52:08.713-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.812-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.816-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.817-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.818-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.818-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.819-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.820-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.821-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.821-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.824-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.826-0500 c20012| 2016-04-06T02:52:08.714-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:52:59.827-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.828-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.830-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.832-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
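
Each completed split also lands an audit document in config.changelog (what: "split", with the before/left/right ranges and their new versions); the entry applied here, for the split of [{ _id: -96.0 }, { _id: MaxKey }) at -95.0, is the second one replicated in this section. A sketch for pulling that trail from the shell, again assuming a config server connection:

    // Read back the most recent split events for the collection from
    // the changelog audit trail seen replicating here.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .limit(2)
        .forEach(printjson);
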
2016-04-06T02:52:59.835-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.836-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.838-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:52:59.838-0500 s20014| 2016-04-06T02:52:41.720-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:59.841-0500 s20014| 2016-04-06T02:52:41.720-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 297 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:52:59.842-0500 s20014| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 296 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:52:59.847-0500 s20014| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 295 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:01.652-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:52:59.849-0500 s20014| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 295 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:52:59.854-0500 s20014| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 296 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:06.593-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929156593) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:52:59.855-0500 s20014| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 296 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:52:59.879-0500 s20014| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to connect to mongovm16:20012 - HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:52:59.886-0500 s20014| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 299 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:52:59.886-0500 s20014| 2016-04-06T02:52:41.721-0500 D NETWORK [replSetDistLockPinger] Marking host mongovm16:20012 as failed [js_test:multi_coll_drop] 2016-04-06T02:52:59.891-0500 s20014| 2016-04-06T02:52:41.721-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:52:59.899-0500 s20014| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to get connection from pool for 
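
RemoteCommand 296 above is the distributed-lock pinger's heartbeat, which failed here because mongovm16:20012 had gone down (this suite runs with continuous config-server stepdowns). The command itself is visible in the log; a shell equivalent, with the _id value (the pinger's process identifier) copied from the log:

    // Hedged sketch: the heartbeat the distributed-lock pinger upserts each round,
    // reconstructed from RemoteCommand 296 above.
    var configDB = db.getSiblingDB("config");
    configDB.runCommand({
        findAndModify: "lockpings",
        query: { _id: "mongovm16:20014:1459929123:-665935931" },  // process id from the log
        update: { $set: { ping: new Date() } },
        upsert: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // On HostUnreachable the pinger marks the host as failed (see above) and the
    // next round is retried against another config-server replica-set member.
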
[js_test:multi_coll_drop] 2016-04-06T02:52:59.901-0500 s20014| 2016-04-06T02:52:41.722-0500 D NETWORK [UserCacheInvalidator] Marking host mongovm16:20012 as failed
[js_test:multi_coll_drop] 2016-04-06T02:52:59.902-0500 s20014| 2016-04-06T02:52:41.722-0500 D SHARDING [UserCacheInvalidator] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:52:59.905-0500 s20014| 2016-04-06T02:52:41.722-0500 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 302 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:11.722-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.907-0500 s20014| 2016-04-06T02:52:41.722-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 302 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:59.909-0500 s20014| 2016-04-06T02:52:41.722-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 302 finished with response: { cacheGeneration: ObjectId('5704c01c3876c4cfd2eb3eb7'), ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.911-0500 s20014| 2016-04-06T02:52:41.725-0500 D NETWORK [Balancer] Marking host mongovm16:20012 as failed
[js_test:multi_coll_drop] 2016-04-06T02:52:59.913-0500 s20014| 2016-04-06T02:52:41.725-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:52:59.917-0500 s20014| 2016-04-06T02:52:41.725-0500 D ASIO [Balancer] startCommand: RemoteCommand 304 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.725-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.929-0500 s20014| 2016-04-06T02:52:41.725-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 304 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:59.932-0500 s20014| 2016-04-06T02:52:41.738-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 304 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929161000|3, t: 3 }, electionId: ObjectId('7fffffff0000000000000003') }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.936-0500 s20014| 2016-04-06T02:52:41.738-0500 D ASIO [Balancer] startCommand: RemoteCommand 306 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:11.738-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.936-0500 s20014| 2016-04-06T02:52:41.738-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 306 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:52:59.940-0500 s20014| 2016-04-06T02:52:41.742-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 306 finished with response: { waitedMS: 3, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.941-0500 s20014| 2016-04-06T02:52:41.742-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929161000|3, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.944-0500 s20014| 2016-04-06T02:52:41.742-0500 D ASIO [Balancer] startCommand: RemoteCommand 308 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.742-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.945-0500 s20014| 2016-04-06T02:52:41.742-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 308 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:59.948-0500 s20014| 2016-04-06T02:52:41.743-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 308 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.949-0500 s20014| 2016-04-06T02:52:41.743-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB
[js_test:multi_coll_drop] 2016-04-06T02:52:59.951-0500 s20014| 2016-04-06T02:52:41.743-0500 D ASIO [Balancer] startCommand: RemoteCommand 310 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.743-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.952-0500 s20014| 2016-04-06T02:52:41.743-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 310 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:59.953-0500 s20014| 2016-04-06T02:52:41.743-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 310 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.954-0500 s20014| 2016-04-06T02:52:41.743-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled
[js_test:multi_coll_drop] 2016-04-06T02:52:59.963-0500 s20014| 2016-04-06T02:52:41.743-0500 D ASIO [Balancer] startCommand: RemoteCommand 312 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.743-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929161743), up: 34, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.976-0500 s20014| 2016-04-06T02:52:41.743-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 312 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:59.982-0500 s20014| 2016-04-06T02:52:41.747-0500 D ASIO [conn1] startCommand: RemoteCommand 313 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.747-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:52:59.983-0500 s20014| 2016-04-06T02:52:41.747-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:59.986-0500 s20014| 2016-04-06T02:52:41.748-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 314 on host mongovm16:20011
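
Requests 308 and 310 show where the balancer round gets its configuration: the chunksize and balancer documents in config.settings. The same documents can be inspected from any mongos, and sh.getBalancerState() is the stock shell helper over the balancer document:

    // The settings documents the Balancer reads each round, per the log above.
    var conf = db.getSiblingDB("config");
    conf.settings.findOne({ _id: "chunksize" });  // { _id: "chunksize", value: 50 } -> 50MB max chunk size
    conf.settings.findOne({ _id: "balancer" });   // { _id: "balancer", stopped: true }
    sh.getBalancerState();                        // false while stopped: true, hence the
                                                  // "skipping balancing round" entry above
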
[js_test:multi_coll_drop] 2016-04-06T02:52:59.995-0500 s20014| 2016-04-06T02:52:41.748-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:52:59.997-0500 s20014| 2016-04-06T02:52:41.748-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 314 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:52:59.999-0500 s20014| 2016-04-06T02:52:41.748-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 313 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.008-0500 s20014| 2016-04-06T02:52:41.765-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 313 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.009-0500 s20014| 2016-04-06T02:52:41.765-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 312 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929161000|5, t: 3 }, electionId: ObjectId('7fffffff0000000000000003') }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.009-0500 s20014| 2016-04-06T02:52:41.766-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|50||5704c02806c33406d4d9c0c0 and 26 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.010-0500 s20014| 2016-04-06T02:52:41.766-0500 D SHARDING [conn1] major version query from 1|50||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|50 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.012-0500 s20014| 2016-04-06T02:52:41.766-0500 D ASIO [conn1] startCommand: RemoteCommand 317 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.766-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|50 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.018-0500 s20014| 2016-04-06T02:52:41.766-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 317 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.020-0500 s20014| 2016-04-06T02:52:41.770-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 317 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: -75.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.020-0500 s20014| 2016-04-06T02:52:41.771-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|52||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.022-0500 s20014| 2016-04-06T02:52:41.771-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 4ms sequenceNumber: 29 version: 1|52||5704c02806c33406d4d9c0c0 based on: 1|50||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.030-0500 s20014| 2016-04-06T02:52:41.771-0500 D ASIO [conn1] startCommand: RemoteCommand 319 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.771-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.036-0500 s20014| 2016-04-06T02:52:41.771-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 319 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.043-0500 s20014| 2016-04-06T02:52:41.771-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 319 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.045-0500 s20014| 2016-04-06T02:52:41.772-0500 I COMMAND [conn1] splitting chunk [{ _id: -75.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.046-0500 s20014| 2016-04-06T02:52:41.772-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20010, no events
[js_test:multi_coll_drop] 2016-04-06T02:53:00.049-0500 s20014| 2016-04-06T02:52:41.839-0500 D ASIO [conn1] startCommand: RemoteCommand 321 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.839-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.054-0500 s20014| 2016-04-06T02:52:41.840-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 321 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.066-0500 s20014| 2016-04-06T02:52:41.840-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 321 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.067-0500 s20014| 2016-04-06T02:52:41.841-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|52||5704c02806c33406d4d9c0c0 and 27 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.073-0500 s20014| 2016-04-06T02:52:41.841-0500 D SHARDING [conn1] major version query from 1|52||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|52 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.076-0500 s20014| 2016-04-06T02:52:41.841-0500 D ASIO [conn1] startCommand: RemoteCommand 323 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:11.841-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|52 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, maxTimeMS: 30000 }
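
The "major version query" entries spell out how mongos refreshes its routing table incrementally: rather than reloading all chunks, it asks only for chunks whose lastmod is at or above the version it already holds, then merges those into the previous chunk manager. A shell approximation of RemoteCommand 317 (afterOpTime is an internal read-after-optime argument and is omitted here):

    // Hedged sketch of the incremental refresh query issued as RemoteCommand 317.
    var conf = db.getSiblingDB("config");
    conf.runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1000, 50) } },  // known version 1|50
        sort: { lastmod: 1 },
        readConcern: { level: "majority" },  // the log adds afterOpTime, an internal field
        maxTimeMS: 30000
    });
    // Returns only the two chunks created by the latest split, which is why each
    // refresh above logs "loaded 2 chunks into new chunk manager".
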
[js_test:multi_coll_drop] 2016-04-06T02:53:00.077-0500 s20014| 2016-04-06T02:52:41.841-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 323 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.083-0500 s20014| 2016-04-06T02:52:41.841-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 323 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.084-0500 s20014| 2016-04-06T02:52:41.841-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|54||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.087-0500 s20014| 2016-04-06T02:52:41.841-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 30 version: 1|54||5704c02806c33406d4d9c0c0 based on: 1|52||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.096-0500 s20014| 2016-04-06T02:52:41.841-0500 D ASIO [conn1] startCommand: RemoteCommand 325 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:11.841-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.096-0500 s20014| 2016-04-06T02:52:41.841-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 325 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.098-0500 s20014| 2016-04-06T02:52:41.842-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 325 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.100-0500 s20014| 2016-04-06T02:52:41.842-0500 I COMMAND [conn1] splitting chunk [{ _id: -74.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.103-0500 s20014| 2016-04-06T02:52:41.952-0500 D ASIO [conn1] startCommand: RemoteCommand 327 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:11.952-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.107-0500 s20014| 2016-04-06T02:52:41.952-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 327 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.114-0500 s20014| 2016-04-06T02:52:41.953-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 327 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.117-0500 s20014| 2016-04-06T02:52:41.953-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|54||5704c02806c33406d4d9c0c0 and 28 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.118-0500 s20014| 2016-04-06T02:52:41.953-0500 D SHARDING [conn1] major version query from 1|54||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|54 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.136-0500 s20014| 2016-04-06T02:52:41.953-0500 D ASIO [conn1] startCommand: RemoteCommand 329 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:11.953-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|54 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.137-0500 s20014| 2016-04-06T02:52:41.953-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 329 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.143-0500 s20014| 2016-04-06T02:52:41.953-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 329 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: -73.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.157-0500 s20014| 2016-04-06T02:52:41.953-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|56||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.159-0500 s20014| 2016-04-06T02:52:41.953-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 31 version: 1|56||5704c02806c33406d4d9c0c0 based on: 1|54||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.164-0500 s20014| 2016-04-06T02:52:41.954-0500 D ASIO [conn1] startCommand: RemoteCommand 331 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.954-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.166-0500 s20014| 2016-04-06T02:52:41.954-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 331 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.171-0500 s20014| 2016-04-06T02:52:41.955-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 331 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.183-0500 s20014| 2016-04-06T02:52:41.955-0500 I COMMAND [conn1] splitting chunk [{ _id: -73.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.187-0500 s20014| 2016-04-06T02:52:42.078-0500 D ASIO [conn1] startCommand: RemoteCommand 333 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.078-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.195-0500 s20014| 2016-04-06T02:52:42.078-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 333 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.203-0500 s20014| 2016-04-06T02:52:42.081-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 333 finished with response: { waitedMS: 1, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.204-0500 s20014| 2016-04-06T02:52:42.081-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|56||5704c02806c33406d4d9c0c0 and 29 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.222-0500 s20014| 2016-04-06T02:52:42.081-0500 D SHARDING [conn1] major version query from 1|56||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|56 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.235-0500 s20014| 2016-04-06T02:52:42.081-0500 D ASIO [conn1] startCommand: RemoteCommand 335 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.081-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|56 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.240-0500 s20014| 2016-04-06T02:52:42.082-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 335 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.252-0500 s20014| 2016-04-06T02:52:42.082-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 335 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: -72.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.254-0500 s20014| 2016-04-06T02:52:42.083-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|58||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.263-0500 s20014| 2016-04-06T02:52:42.083-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 2ms sequenceNumber: 32 version: 1|58||5704c02806c33406d4d9c0c0 based on: 1|56||5704c02806c33406d4d9c0c0
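
Each "splitting chunk [{ _id: N },{ _id: MaxKey })" round above is the test driving a manual split of the topmost chunk at the next integer key. From a shell connected to the mongos (s20014 here), the same split of the [-73, MaxKey) chunk would be:

    // One round of the split loop above, as a manual shell call. The resulting
    // chunks [-73, -72) and [-72, MaxKey) match lastmod 1|57 and 1|58 in the log.
    sh.splitAt("multidrop.coll", { _id: -72 });
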
[js_test:multi_coll_drop] 2016-04-06T02:53:00.281-0500 s20014| 2016-04-06T02:52:42.084-0500 D ASIO [conn1] startCommand: RemoteCommand 337 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.084-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.293-0500 s20014| 2016-04-06T02:52:42.084-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 337 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.298-0500 s20014| 2016-04-06T02:52:42.084-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 337 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.303-0500 s20014| 2016-04-06T02:52:42.084-0500 I COMMAND [conn1] splitting chunk [{ _id: -72.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.306-0500 s20014| 2016-04-06T02:52:42.149-0500 D ASIO [conn1] startCommand: RemoteCommand 339 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.149-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.308-0500 s20014| 2016-04-06T02:52:42.151-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 339 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.327-0500 s20014| 2016-04-06T02:52:42.152-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 339 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.332-0500 s20014| 2016-04-06T02:52:42.152-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|58||5704c02806c33406d4d9c0c0 and 30 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.336-0500 s20014| 2016-04-06T02:52:42.152-0500 D SHARDING [conn1] major version query from 1|58||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|58 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.352-0500 s20014| 2016-04-06T02:52:42.152-0500 D ASIO [conn1] startCommand: RemoteCommand 341 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.152-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|58 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.365-0500 s20014| 2016-04-06T02:52:42.152-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 341 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.374-0500 s20014| 2016-04-06T02:52:42.153-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 341 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: -71.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.375-0500 s20014| 2016-04-06T02:52:42.153-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|60||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.377-0500 s20014| 2016-04-06T02:52:42.153-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 33 version: 1|60||5704c02806c33406d4d9c0c0 based on: 1|58||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.379-0500 s20014| 2016-04-06T02:52:42.153-0500 D ASIO [conn1] startCommand: RemoteCommand 343 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.153-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.380-0500 s20014| 2016-04-06T02:52:42.153-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 343 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.382-0500 s20014| 2016-04-06T02:52:42.154-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 343 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.383-0500 s20014| 2016-04-06T02:52:42.154-0500 I COMMAND [conn1] splitting chunk [{ _id: -71.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.392-0500 s20014| 2016-04-06T02:52:42.287-0500 D ASIO [conn1] startCommand: RemoteCommand 345 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.287-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.395-0500 s20014| 2016-04-06T02:52:42.287-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 345 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.400-0500 s20014| 2016-04-06T02:52:42.290-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 345 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.403-0500 s20014| 2016-04-06T02:52:42.304-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|60||5704c02806c33406d4d9c0c0 and 31 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.417-0500 s20014| 2016-04-06T02:52:42.304-0500 D SHARDING [conn1] major version query from 1|60||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|60 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.438-0500 s20014| 2016-04-06T02:52:42.304-0500 D ASIO [conn1] startCommand: RemoteCommand 347 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.304-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|60 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.443-0500 s20014| 2016-04-06T02:52:42.304-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 347 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.458-0500 s20014| 2016-04-06T02:52:42.305-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 347 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: -70.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.466-0500 s20014| 2016-04-06T02:52:42.305-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|62||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.467-0500 s20014| 2016-04-06T02:52:42.305-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 34 version: 1|62||5704c02806c33406d4d9c0c0 based on: 1|60||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.473-0500 s20014| 2016-04-06T02:52:42.305-0500 D ASIO [conn1] startCommand: RemoteCommand 349 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.305-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.474-0500 s20014| 2016-04-06T02:52:42.305-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 349 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.478-0500 s20014| 2016-04-06T02:52:42.313-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 349 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.490-0500 s20014| 2016-04-06T02:52:42.313-0500 I COMMAND [conn1] splitting chunk [{ _id: -70.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.512-0500 s20014| 2016-04-06T02:52:42.410-0500 D ASIO [conn1] startCommand: RemoteCommand 351 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.410-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.512-0500 s20014| 2016-04-06T02:52:42.416-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 351 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.518-0500 s20014| 2016-04-06T02:52:42.424-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 351 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.526-0500 s20014| 2016-04-06T02:52:42.424-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|62||5704c02806c33406d4d9c0c0 and 32 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.528-0500 s20014| 2016-04-06T02:52:42.424-0500 D SHARDING [conn1] major version query from 1|62||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|62 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.529-0500 s20014| 2016-04-06T02:52:42.424-0500 D ASIO [conn1] startCommand: RemoteCommand 353 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.424-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|62 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.530-0500 s20014| 2016-04-06T02:52:42.424-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 353 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.532-0500 s20014| 2016-04-06T02:52:42.425-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 353 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: -69.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.532-0500 s20014| 2016-04-06T02:52:42.425-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|64||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.533-0500 s20014| 2016-04-06T02:52:42.425-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 1ms sequenceNumber: 35 version: 1|64||5704c02806c33406d4d9c0c0 based on: 1|62||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.558-0500 s20014| 2016-04-06T02:52:42.425-0500 D ASIO [conn1] startCommand: RemoteCommand 355 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.425-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.563-0500 s20014| 2016-04-06T02:52:42.428-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 355 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.566-0500 s20014| 2016-04-06T02:52:42.436-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 355 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.567-0500 s20014| 2016-04-06T02:52:42.436-0500 I COMMAND [conn1] splitting chunk [{ _id: -69.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.568-0500 s20014| 2016-04-06T02:52:42.702-0500 D ASIO [conn1] startCommand: RemoteCommand 357 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.702-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.568-0500 s20014| 2016-04-06T02:52:42.702-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 357 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.571-0500 s20014| 2016-04-06T02:52:42.703-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 357 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.572-0500 s20014| 2016-04-06T02:52:42.704-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|64||5704c02806c33406d4d9c0c0 and 33 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.573-0500 s20014| 2016-04-06T02:52:42.704-0500 D SHARDING [conn1] major version query from 1|64||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|64 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.576-0500 s20014| 2016-04-06T02:52:42.704-0500 D ASIO [conn1] startCommand: RemoteCommand 359 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.704-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|64 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.578-0500 s20014| 2016-04-06T02:52:42.704-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 359 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.584-0500 s20014| 2016-04-06T02:52:42.706-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 359 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: -68.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.585-0500 s20014| 2016-04-06T02:52:42.711-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|66||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.588-0500 s20014| 2016-04-06T02:52:42.711-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 7ms sequenceNumber: 36 version: 1|66||5704c02806c33406d4d9c0c0 based on: 1|64||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.588-0500 s20014| 2016-04-06T02:52:42.712-0500 D ASIO [conn1] startCommand: RemoteCommand 361 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.712-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.591-0500 s20014| 2016-04-06T02:52:42.712-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 361 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.592-0500 s20014| 2016-04-06T02:52:42.712-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 361 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.595-0500 s20014| 2016-04-06T02:52:42.712-0500 I COMMAND [conn1] splitting chunk [{ _id: -68.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.596-0500 s20014| 2016-04-06T02:52:42.824-0500 D ASIO [conn1] startCommand: RemoteCommand 363 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.824-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.597-0500 s20014| 2016-04-06T02:52:42.824-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 363 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.601-0500 s20014| 2016-04-06T02:52:42.833-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 363 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.603-0500 s20014| 2016-04-06T02:52:42.833-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|66||5704c02806c33406d4d9c0c0 and 34 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:00.604-0500 s20014| 2016-04-06T02:52:42.834-0500 D SHARDING [conn1] major version query from 1|66||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|66 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.606-0500 s20014| 2016-04-06T02:52:42.834-0500 D ASIO [conn1] startCommand: RemoteCommand 365 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.834-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|66 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.606-0500 s20014| 2016-04-06T02:52:42.834-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 365 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:00.613-0500 s20014| 2016-04-06T02:52:42.835-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 365 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: -67.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.614-0500 s20014| 2016-04-06T02:52:42.835-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|68||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.614-0500 s20014| 2016-04-06T02:52:42.835-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 1ms sequenceNumber: 37 version: 1|68||5704c02806c33406d4d9c0c0 based on: 1|66||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:00.628-0500 s20014| 2016-04-06T02:52:42.835-0500 D ASIO [conn1] startCommand: RemoteCommand 367 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.835-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.630-0500 s20014| 2016-04-06T02:52:42.838-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 367 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:00.633-0500 s20014| 2016-04-06T02:52:42.839-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 367 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.633-0500 s20014| 2016-04-06T02:52:42.839-0500 I COMMAND [conn1] splitting chunk [{ _id: -67.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:00.634-0500 s20014| 2016-04-06T02:52:42.937-0500 D ASIO [conn1] startCommand: RemoteCommand 369 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.937-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:00.634-0500 s20014| 2016-04-06T02:52:42.941-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 369 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:00.636-0500 s20014| 2016-04-06T02:52:42.942-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 369 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.639-0500 s20014| 2016-04-06T02:52:42.942-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|68||5704c02806c33406d4d9c0c0 and 35 chunks [js_test:multi_coll_drop] 2016-04-06T02:53:00.642-0500 s20014| 2016-04-06T02:52:42.942-0500 D SHARDING [conn1] major version query from 1|68||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|68 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.644-0500 s20014| 2016-04-06T02:52:42.942-0500 D ASIO [conn1] startCommand: RemoteCommand 371 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:12.942-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|68 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.645-0500 s20014| 2016-04-06T02:52:42.942-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 371 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.648-0500 s20014| 2016-04-06T02:52:42.945-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 371 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: -66.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.649-0500 s20014| 2016-04-06T02:52:42.945-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|70||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:00.650-0500 s20014| 2016-04-06T02:52:42.945-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 2ms sequenceNumber: 38 version: 1|70||5704c02806c33406d4d9c0c0 based on: 1|68||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:00.653-0500 s20014| 2016-04-06T02:52:42.945-0500 D ASIO [conn1] startCommand: RemoteCommand 373 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:12.945-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.655-0500 s20014| 2016-04-06T02:52:42.945-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 373 on host 
mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:00.658-0500 s20014| 2016-04-06T02:52:42.952-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 373 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.660-0500 s20014| 2016-04-06T02:52:42.952-0500 I COMMAND [conn1] splitting chunk [{ _id: -66.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:00.662-0500 s20014| 2016-04-06T02:52:43.194-0500 D ASIO [conn1] startCommand: RemoteCommand 375 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:13.194-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.664-0500 s20014| 2016-04-06T02:52:43.194-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 375 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.667-0500 s20014| 2016-04-06T02:52:43.197-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 375 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.668-0500 s20014| 2016-04-06T02:52:43.197-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|70||5704c02806c33406d4d9c0c0 and 36 chunks [js_test:multi_coll_drop] 2016-04-06T02:53:00.672-0500 s20014| 2016-04-06T02:52:43.197-0500 D SHARDING [conn1] major version query from 1|70||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|70 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.675-0500 s20014| 2016-04-06T02:52:43.197-0500 D ASIO [conn1] startCommand: RemoteCommand 377 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:13.197-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|70 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.676-0500 s20014| 2016-04-06T02:52:43.197-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 377 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:00.682-0500 s20014| 2016-04-06T02:52:43.200-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 377 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: -65.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 
} [js_test:multi_coll_drop] 2016-04-06T02:53:00.684-0500 s20014| 2016-04-06T02:52:43.201-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|72||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:00.685-0500 s20014| 2016-04-06T02:52:43.201-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 3ms sequenceNumber: 39 version: 1|72||5704c02806c33406d4d9c0c0 based on: 1|70||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:00.686-0500 s20014| 2016-04-06T02:52:43.201-0500 D ASIO [conn1] startCommand: RemoteCommand 379 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:13.201-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.688-0500 s20014| 2016-04-06T02:52:43.201-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 379 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:00.693-0500 s20014| 2016-04-06T02:52:43.203-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 379 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.695-0500 s20014| 2016-04-06T02:52:43.203-0500 I COMMAND [conn1] splitting chunk [{ _id: -65.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:00.702-0500 s20014| 2016-04-06T02:52:43.324-0500 D ASIO [conn1] startCommand: RemoteCommand 381 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:13.324-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.706-0500 s20014| 2016-04-06T02:52:43.324-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 381 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:00.715-0500 s20014| 2016-04-06T02:52:43.332-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 381 finished with response: { waitedMS: 7, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.716-0500 s20014| 2016-04-06T02:52:43.332-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|72||5704c02806c33406d4d9c0c0 and 37 chunks [js_test:multi_coll_drop] 2016-04-06T02:53:00.718-0500 s20014| 2016-04-06T02:52:43.332-0500 D SHARDING [conn1] major version query from 1|72||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|72 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.721-0500 s20014| 2016-04-06T02:52:43.332-0500 D ASIO [conn1] startCommand: RemoteCommand 383 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:13.332-0500 
cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|72 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.725-0500 s20014| 2016-04-06T02:52:43.332-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 383 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.745-0500 s20014| 2016-04-06T02:52:43.333-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 383 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: -64.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.747-0500 s20014| 2016-04-06T02:52:43.333-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|74||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:00.750-0500 s20014| 2016-04-06T02:52:43.333-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 40 version: 1|74||5704c02806c33406d4d9c0c0 based on: 1|72||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:00.753-0500 s20014| 2016-04-06T02:52:43.333-0500 D ASIO [conn1] startCommand: RemoteCommand 385 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:13.333-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.760-0500 s20014| 2016-04-06T02:52:43.333-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 385 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.764-0500 s20014| 2016-04-06T02:52:43.334-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 385 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.765-0500 s20014| 2016-04-06T02:52:43.334-0500 I COMMAND [conn1] splitting chunk [{ _id: -64.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:00.774-0500 d20010| 2016-04-06T02:52:41.710-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:41.710-0500-5704c04965c17830b843f1b0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161710), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -76.0 }, max: { _id: MaxKey } }, left: { min: { _id: -76.0 }, max: { _id: -75.0 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -75.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 
2016-04-06T02:53:00.776-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.777-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.779-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.779-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.780-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.783-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.783-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.784-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.786-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.787-0500 c20012| 2016-04-06T02:52:08.714-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.799-0500 c20012| 2016-04-06T02:52:08.714-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:00.803-0500 c20012| 2016-04-06T02:52:08.715-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:00.809-0500 c20012| 2016-04-06T02:52:08.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 562 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|34, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:00.811-0500 c20012| 2016-04-06T02:52:08.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 562 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.812-0500 c20012| 2016-04-06T02:52:08.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 562 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.814-0500 c20012| 2016-04-06T02:52:08.716-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 564 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.716-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|34, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:00.815-0500 c20012| 2016-04-06T02:52:08.716-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 564 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.833-0500 c20012| 2016-04-06T02:52:08.717-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:00.839-0500 c20012| 2016-04-06T02:52:08.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 565 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|35, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:00.840-0500 c20012| 2016-04-06T02:52:08.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 565 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.841-0500 c20012| 2016-04-06T02:52:08.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 565 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.845-0500 c20012| 2016-04-06T02:52:08.717-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 564 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.847-0500 c20012| 2016-04-06T02:52:08.718-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|35, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.848-0500 c20012| 2016-04-06T02:52:08.718-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:00.850-0500 c20012| 2016-04-06T02:52:08.718-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 568 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.718-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:00.851-0500 c20012| 2016-04-06T02:52:08.718-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 568 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.853-0500 c20012| 2016-04-06T02:52:08.719-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 568 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|36, t: 1, h: 3351989292470422809, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.854-0500 c20012| 2016-04-06T02:52:08.719-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|36 and ending at ts: Timestamp 1459929128000|36 [js_test:multi_coll_drop] 2016-04-06T02:53:00.855-0500 c20012| 2016-04-06T02:52:08.719-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:00.858-0500 c20012| 2016-04-06T02:52:08.719-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.861-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.861-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.862-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.864-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.865-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.867-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.869-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.870-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.870-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.878-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.879-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.880-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.880-0500 c20012| 2016-04-06T02:52:08.720-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:00.882-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.883-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.883-0500 c20012| 2016-04-06T02:52:08.720-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:00.888-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.889-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.890-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:00.891-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.891-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.892-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.895-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.897-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.899-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.901-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.902-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.906-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.906-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.907-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.909-0500 c20012| 2016-04-06T02:52:08.720-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.910-0500 c20012| 2016-04-06T02:52:08.721-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.911-0500 c20012| 2016-04-06T02:52:08.721-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.914-0500 c20012| 2016-04-06T02:52:08.721-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:00.917-0500 c20012| 2016-04-06T02:52:08.721-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:00.922-0500 c20012| 2016-04-06T02:52:08.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 570 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|35, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:00.923-0500 c20012| 2016-04-06T02:52:08.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 570 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.925-0500 c20012| 2016-04-06T02:52:08.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 570 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.928-0500 c20012| 2016-04-06T02:52:08.722-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 572 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.722-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|35, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:00.932-0500 c20012| 2016-04-06T02:52:08.722-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:00.940-0500 c20012| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 573 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] 
} [js_test:multi_coll_drop] 2016-04-06T02:53:00.941-0500 c20012| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 573 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.941-0500 c20012| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 573 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.943-0500 c20012| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 572 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.944-0500 c20012| 2016-04-06T02:52:08.722-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 572 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.946-0500 c20012| 2016-04-06T02:52:08.722-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.947-0500 c20012| 2016-04-06T02:52:08.722-0500 D REPL [conn7] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929128000|36, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929128000|35, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.947-0500 c20012| 2016-04-06T02:52:08.722-0500 D REPL [conn7] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999976μs [js_test:multi_coll_drop] 2016-04-06T02:53:00.948-0500 c20012| 2016-04-06T02:52:08.723-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|36, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.949-0500 c20012| 2016-04-06T02:52:08.723-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:00.953-0500 c20012| 2016-04-06T02:52:08.723-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.953-0500 c20012| 2016-04-06T02:52:08.723-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:00.961-0500 c20012| 2016-04-06T02:52:08.723-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:00.963-0500 c20012| 2016-04-06T02:52:08.723-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 576 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.723-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:00.965-0500 c20012| 2016-04-06T02:52:08.723-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:00.965-0500 c20012| 2016-04-06T02:52:08.723-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 576 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:00.966-0500 c20012| 2016-04-06T02:52:08.724-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|10 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.967-0500 c20012| 2016-04-06T02:52:08.724-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:00.968-0500 c20012| 2016-04-06T02:52:08.724-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|10 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.969-0500 c20012| 2016-04-06T02:52:08.724-0500 D QUERY [conn7] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:00.973-0500 c20012| 2016-04-06T02:52:08.724-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|10 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:00.975-0500 c20012| 2016-04-06T02:52:08.725-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.976-0500 c20012| 2016-04-06T02:52:08.725-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:00.977-0500 c20012| 2016-04-06T02:52:08.725-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.978-0500 c20012| 2016-04-06T02:52:08.725-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:00.979-0500 c20012| 2016-04-06T02:52:08.725-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|36, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:00.981-0500 c20012| 2016-04-06T02:52:08.726-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 576 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|37, t: 1, h: 8332631665531795890, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f188'), state: 2, when: new Date(1459929128725), why: "splitting chunk [{ _id: -95.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:00.982-0500 c20012| 2016-04-06T02:52:08.726-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|37 and ending at ts: Timestamp 1459929128000|37 [js_test:multi_coll_drop] 2016-04-06T02:53:00.982-0500 c20012| 2016-04-06T02:52:08.726-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:00.982-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.983-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.984-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.984-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.985-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.986-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.986-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.986-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.987-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.988-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.989-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.990-0500 c20012| 2016-04-06T02:52:08.726-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.991-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.992-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.992-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.993-0500 c20012| 2016-04-06T02:52:08.727-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:00.994-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.995-0500 c20012| 2016-04-06T02:52:08.727-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:00.996-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:00.997-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
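The c20012 oplog fetches above (ops 1459929128000|36 and |37 on config.locks) are the distributed-lock handshake that brackets each split: the lock document for _id "multidrop.coll" is set to state: 0 as one split releases it, then back to state: 2 with a fresh ts ObjectId and a "why" string naming the next split. A small sketch for inspecting that lock from the shell, assuming configConn points at the current config primary:

    // Collection-level distributed lock: state 2 = held, state 0 = free.
    // The "why" field records which operation currently owns it.
    var configConn = new Mongo("mongovm16:20011");
    printjson(configConn.getDB("config").getCollection("locks")
                  .findOne({ _id: "multidrop.coll" }));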
2016-04-06T02:53:00.999-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.026-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.028-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.031-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.032-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.032-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.034-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.037-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.037-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.038-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.042-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.043-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.043-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.045-0500 c20012| 2016-04-06T02:52:08.727-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.047-0500 c20012| 2016-04-06T02:52:08.727-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:01.052-0500 c20012| 2016-04-06T02:52:08.728-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:01.057-0500 c20012| 2016-04-06T02:52:08.728-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 578 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|36, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:01.059-0500 c20012| 2016-04-06T02:52:08.728-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 578 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:01.059-0500 c20012| 2016-04-06T02:52:08.728-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 578 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.063-0500 c20012| 2016-04-06T02:52:08.728-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 580 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.728-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|36, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:01.064-0500 c20012| 2016-04-06T02:52:08.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 580 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:01.067-0500 c20012| 2016-04-06T02:52:08.729-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:01.071-0500 c20012| 2016-04-06T02:52:08.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 581 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|37, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:01.071-0500 c20012| 2016-04-06T02:52:08.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 581 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:01.071-0500 c20012| 2016-04-06T02:52:08.730-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 580 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.072-0500 c20012| 2016-04-06T02:52:08.730-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|37, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.072-0500 c20012| 2016-04-06T02:52:08.730-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:01.073-0500 c20012| 2016-04-06T02:52:08.730-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|37, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.073-0500 c20012| 2016-04-06T02:52:08.730-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 583 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.730-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:01.074-0500 c20012| 2016-04-06T02:52:08.730-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 581 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.075-0500 c20012| 2016-04-06T02:52:08.730-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:01.077-0500 c20012| 2016-04-06T02:52:08.730-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 583 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:01.078-0500 c20012| 2016-04-06T02:52:08.730-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|37, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.079-0500 c20012| 2016-04-06T02:52:08.730-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:01.082-0500 c20012| 2016-04-06T02:52:08.730-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|37, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:01.083-0500 c20012| 2016-04-06T02:52:08.730-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|12 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|37, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.088-0500 c20012| 2016-04-06T02:52:08.730-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:01.093-0500 c20012| 2016-04-06T02:52:08.730-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|12 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|37, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.098-0500 c20012| 2016-04-06T02:52:08.730-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:01.104-0500 c20012| 2016-04-06T02:52:08.731-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|12 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|37, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:01.112-0500 c20012| 2016-04-06T02:52:08.732-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 583 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|38, t: 1, h: 1151462575445385727, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: -94.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-95.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 
1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-94.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.114-0500 c20012| 2016-04-06T02:52:08.732-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|38 and ending at ts: Timestamp 1459929128000|38 [js_test:multi_coll_drop] 2016-04-06T02:53:01.130-0500 c20012| 2016-04-06T02:52:08.732-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:01.133-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.134-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.135-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.135-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.140-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.140-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.142-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.142-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.143-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.144-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.145-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.146-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.146-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.156-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.157-0500 c20012| 2016-04-06T02:52:08.732-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:01.157-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool 
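Request 583 above delivers the split commit itself: one applyOps command, replicated as a single oplog entry, upserts both halves of the chunk into config.chunks (lastmod 1000|13 and 1000|14 under the same epoch) so the metadata change lands atomically, with writeConcern w: "majority" so it is not acknowledged before it replicates. A sketch for checking the outcome from a test, using the shell's own assert helpers; the connection target is a placeholder:

    // After the commit, the two newest chunks for the collection are the
    // split's children, with consecutive minor versions under one epoch.
    var configConn = new Mongo("mongovm16:20011");
    var newest = configConn.getDB("config").getCollection("chunks")
        .find({ ns: "multidrop.coll" }).sort({ lastmod: -1 }).limit(2).toArray();
    assert.eq(2, newest.length);
    assert.eq(newest[0].lastmodEpoch, newest[1].lastmodEpoch);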
[js_test:multi_coll_drop] 2016-04-06T02:53:01.158-0500 c20012| 2016-04-06T02:52:08.732-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.158-0500 c20012| 2016-04-06T02:52:08.732-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-95.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:01.158-0500 c20012| 2016-04-06T02:52:08.733-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-94.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:01.158-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.159-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.159-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.160-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.160-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.161-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.162-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.163-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.165-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.166-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.167-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.167-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.168-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.169-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.169-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.169-0500 c20012| 2016-04-06T02:52:08.733-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.173-0500 c20012| 2016-04-06T02:52:08.733-0500 D QUERY [rsSync] Only one plan is available; it will be run but will 
not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:01.176-0500 c20012| 2016-04-06T02:52:08.733-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:01.180-0500 c20012| 2016-04-06T02:52:08.733-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 586 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|37, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:01.180-0500 c20012| 2016-04-06T02:52:08.733-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 586 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:01.183-0500 c20012| 2016-04-06T02:52:08.733-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 586 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.184-0500 c20012| 2016-04-06T02:52:08.734-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 588 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.734-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|37, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:01.187-0500 c20012| 2016-04-06T02:52:08.734-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 588 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:01.189-0500 c20012| 2016-04-06T02:52:08.734-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:01.200-0500 c20012| 2016-04-06T02:52:08.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 589 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: 
[js_test:multi_coll_drop] 2016-04-06T02:53:01.200-0500 c20012| 2016-04-06T02:52:08.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 589 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.204-0500 c20012| 2016-04-06T02:52:08.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 589 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.205-0500 c20012| 2016-04-06T02:52:08.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 589 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.209-0500 c20012| 2016-04-06T02:52:08.735-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 588 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.210-0500 c20012| 2016-04-06T02:52:08.735-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|38, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.212-0500 c20012| 2016-04-06T02:52:08.735-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:01.214-0500 c20012| 2016-04-06T02:52:08.735-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 592 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.735-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.214-0500 c20012| 2016-04-06T02:52:08.735-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 592 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.222-0500 c20012| 2016-04-06T02:52:08.738-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 592 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|39, t: 1, h: -144793915507581801, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.735-0500-5704c02865c17830b843f189", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128735), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -95.0 }, max: { _id: MaxKey } }, left: { min: { _id: -95.0 }, max: { _id: -94.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -94.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.223-0500 c20012| 2016-04-06T02:52:08.738-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|39 and ending at ts: Timestamp 1459929128000|39
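The single document fetched by request 592 is the config.changelog record of the chunk split that just committed on the primary: details.before holds the original range and details.left/details.right the two halves, with chunk versions bumped to 1000|13 and 1000|14 under the same epoch. These records are ordinary documents on the config servers, so the split history can be inspected directly; a hedged example:

    // Inspect recent split records for this collection on a config server.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 }).limit(5)
        .forEach(printjson);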
[js_test:multi_coll_drop] 2016-04-06T02:53:01.223-0500 c20012| 2016-04-06T02:52:08.738-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.226-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.229-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.230-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.231-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.232-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.233-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.234-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.241-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.241-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.244-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.245-0500 c20012| 2016-04-06T02:52:08.738-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.245-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.248-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.249-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.251-0500 c20012| 2016-04-06T02:52:08.739-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:01.254-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.255-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.255-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.255-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.259-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.259-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.261-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.263-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.278-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.279-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.280-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.282-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.283-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.285-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.286-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.289-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.289-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.290-0500 c20012| 2016-04-06T02:52:08.739-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.291-0500 c20012| 2016-04-06T02:52:08.739-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.299-0500 c20012| 2016-04-06T02:52:08.739-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.304-0500 c20012| 2016-04-06T02:52:08.739-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 594 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|38, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.307-0500 c20012| 2016-04-06T02:52:08.739-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 594 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.310-0500 c20012| 2016-04-06T02:52:08.739-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 594 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.312-0500 c20012| 2016-04-06T02:52:08.740-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 596 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.740-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|38, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.316-0500 c20012| 2016-04-06T02:52:08.740-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 596 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.331-0500 c20012| 2016-04-06T02:52:08.751-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.353-0500 c20012| 2016-04-06T02:52:08.751-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 597 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.363-0500 c20012| 2016-04-06T02:52:08.751-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 597 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.373-0500 c20012| 2016-04-06T02:52:08.751-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 597 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.375-0500 c20012| 2016-04-06T02:52:08.752-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 596 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.381-0500 c20012| 2016-04-06T02:52:08.752-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|39, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.403-0500 c20012| 2016-04-06T02:52:08.752-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:01.405-0500 c20012| 2016-04-06T02:52:08.752-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 600 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.752-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.407-0500 c20012| 2016-04-06T02:52:08.752-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 600 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.410-0500 c20012| 2016-04-06T02:52:08.754-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 600 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|40, t: 1, h: -5970909802005772631, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.412-0500 c20012| 2016-04-06T02:52:08.754-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|40 and ending at ts: Timestamp 1459929128000|40
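The op: "u" document in request 600's batch is the distributed lock for multidrop.coll being released: $set: { state: 0 } marks the lock document free again once the previous split has finished (in this version's lock scheme, 0 appears as unlocked and 2 as held, as the following entries confirm). The lock lives in an ordinary collection on the config servers, so its current state can be read directly; a small hedged example:

    // Current distributed-lock document for the collection (state 0 = free, 2 = held).
    printjson(db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" }));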
[js_test:multi_coll_drop] 2016-04-06T02:53:01.431-0500 c20012| 2016-04-06T02:52:08.755-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.433-0500 c20012| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.433-0500 c20012| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.435-0500 c20012| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.439-0500 c20012| 2016-04-06T02:52:08.755-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.439-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.439-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.440-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.445-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.445-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.446-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.447-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.449-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.449-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.449-0500 c20012| 2016-04-06T02:52:08.756-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:01.450-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.453-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.459-0500 c20012| 2016-04-06T02:52:08.756-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.460-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.461-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.462-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.463-0500 c20012| 2016-04-06T02:52:08.756-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.468-0500 c20012| 2016-04-06T02:52:08.756-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 602 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.756-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|39, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.469-0500 c20012| 2016-04-06T02:52:08.756-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 602 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.477-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.484-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.487-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.488-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.488-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.489-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.489-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.490-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.495-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.497-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.498-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.499-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.500-0500 c20012| 2016-04-06T02:52:08.757-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.502-0500 c20012| 2016-04-06T02:52:08.757-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.506-0500 c20012| 2016-04-06T02:52:08.758-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.512-0500 c20012| 2016-04-06T02:52:08.758-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 603 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|39, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.514-0500 c20012| 2016-04-06T02:52:08.758-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 603 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.516-0500 c20012| 2016-04-06T02:52:08.758-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 603 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.518-0500 c20012| 2016-04-06T02:52:08.759-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.520-0500 c20012| 2016-04-06T02:52:08.759-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 605 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.521-0500 c20012| 2016-04-06T02:52:08.760-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 605 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.521-0500 c20012| 2016-04-06T02:52:08.760-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 605 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.522-0500 c20012| 2016-04-06T02:52:08.765-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 602 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.524-0500 c20012| 2016-04-06T02:52:08.765-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|40, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.524-0500 c20012| 2016-04-06T02:52:08.765-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:01.534-0500 c20012| 2016-04-06T02:52:08.765-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 608 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.765-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.535-0500 c20012| 2016-04-06T02:52:08.765-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 608 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.542-0500 c20012| 2016-04-06T02:52:08.770-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 608 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|41, t: 1, h: -8586936061680186804, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f18a'), state: 2, when: new Date(1459929128769), why: "splitting chunk [{ _id: -94.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.544-0500 c20012| 2016-04-06T02:52:08.770-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|41 and ending at ts: Timestamp 1459929128000|41
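Request 608 shows the opposite lock transition: the same lock document flips to state: 2, with a fresh ts ObjectId identifying this lock session, a when timestamp, and a free-text why recording that it now guards the split of [{ _id: -94.0 }, { _id: MaxKey }). A hedged way to list every lock currently held in a cluster:

    // List all distributed locks currently held (state > 0).
    db.getSiblingDB("config").locks.find({ state: { $gt: 0 } }).forEach(printjson);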
[js_test:multi_coll_drop] 2016-04-06T02:53:01.547-0500 c20012| 2016-04-06T02:52:08.770-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.550-0500 c20012| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.550-0500 c20012| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.552-0500 c20012| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.555-0500 c20012| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.560-0500 c20012| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.561-0500 c20012| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.562-0500 c20012| 2016-04-06T02:52:08.770-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.564-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.564-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.567-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.571-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.572-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.573-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.574-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.577-0500 c20012| 2016-04-06T02:52:08.771-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:01.578-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.581-0500 c20012| 2016-04-06T02:52:08.771-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.583-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.583-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.585-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.587-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.588-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.591-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.591-0500 c20012| 2016-04-06T02:52:08.771-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.592-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.592-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.595-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.596-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.599-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.600-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.607-0500 c20012| 2016-04-06T02:52:08.772-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 610 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.772-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|40, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.610-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.612-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.613-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.615-0500 c20012| 2016-04-06T02:52:08.772-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.615-0500 c20012| 2016-04-06T02:52:08.772-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 610 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.632-0500 c20012| 2016-04-06T02:52:08.773-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.634-0500 c20012| 2016-04-06T02:52:08.773-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.638-0500 c20012| 2016-04-06T02:52:08.773-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 611 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|40, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.639-0500 c20012| 2016-04-06T02:52:08.773-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 611 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.640-0500 c20012| 2016-04-06T02:52:08.773-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 611 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.641-0500 c20012| 2016-04-06T02:52:08.775-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 610 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.646-0500 c20012| 2016-04-06T02:52:08.776-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.648-0500 c20012| 2016-04-06T02:52:08.776-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|41, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.655-0500 c20012| 2016-04-06T02:52:08.776-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 614 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.656-0500 c20012| 2016-04-06T02:52:08.776-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:01.664-0500 c20012| 2016-04-06T02:52:08.776-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 614 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.665-0500 c20012| 2016-04-06T02:52:08.776-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 614 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.669-0500 c20012| 2016-04-06T02:52:08.776-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 615 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.776-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.672-0500 c20012| 2016-04-06T02:52:08.777-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 615 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.686-0500 c20012| 2016-04-06T02:52:08.780-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 615 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|42, t: 1, h: 833305568785647658, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: -93.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-94.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-93.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.694-0500 c20012| 2016-04-06T02:52:08.780-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|42 and ending at ts: Timestamp 1459929128000|42
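Request 615 carries the actual metadata commit for the next split: a single applyOps command against the config server upserts both post-split chunk documents, bumping the chunk versions to 1000|15 and 1000|16 under the same epoch, written with w: "majority" so the new routing metadata cannot roll back. A stripped-down sketch of that command's shape follows (values copied from the log; applyOps is an internal command, shown for structure only, with the lastmodEpoch fields omitted for brevity):

    // Sketch of the split's metadata commit; not intended to be run by hand.
    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-94.0", ns: "multidrop.coll",
                   min: { _id: -94.0 }, max: { _id: -93.0 },
                   lastmod: Timestamp(1000, 15), shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-94.0" } }
            // ...plus a second upsert for the [-93.0, MaxKey) half.
        ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });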
[js_test:multi_coll_drop] 2016-04-06T02:53:01.697-0500 c20012| 2016-04-06T02:52:08.781-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.700-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.701-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.710-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.710-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.712-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.712-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.715-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.716-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.722-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.723-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.723-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.725-0500 c20012| 2016-04-06T02:52:08.781-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.725-0500 c20012| 2016-04-06T02:52:08.782-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.729-0500 c20012| 2016-04-06T02:52:08.782-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:01.732-0500 c20012| 2016-04-06T02:52:08.782-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.732-0500 c20012| 2016-04-06T02:52:08.782-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.739-0500 c20012| 2016-04-06T02:52:08.782-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-94.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.747-0500 c20012| 2016-04-06T02:52:08.782-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.751-0500 c20012| 2016-04-06T02:52:08.782-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-93.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.760-0500 c20012| 2016-04-06T02:52:08.782-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 618 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.782-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|41, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.762-0500 c20012| 2016-04-06T02:52:08.783-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 618 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.764-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.766-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.767-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.767-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.770-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.770-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.771-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.773-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.774-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.775-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.777-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.780-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.782-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.784-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.784-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.785-0500 c20012| 2016-04-06T02:52:08.783-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.786-0500 c20012| 2016-04-06T02:52:08.783-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.792-0500 c20012| 2016-04-06T02:52:08.783-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.796-0500 c20012| 2016-04-06T02:52:08.783-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 619 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|41, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.798-0500 c20012| 2016-04-06T02:52:08.783-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 619 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.800-0500 c20012| 2016-04-06T02:52:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 619 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.801-0500 c20012| 2016-04-06T02:52:08.784-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 618 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.802-0500 c20012| 2016-04-06T02:52:08.784-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|42, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.803-0500 c20012| 2016-04-06T02:52:08.784-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:01.804-0500 c20012| 2016-04-06T02:52:08.784-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 622 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.784-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.804-0500 c20012| 2016-04-06T02:52:08.784-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 622 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.807-0500 c20012| 2016-04-06T02:52:08.788-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 622 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|43, t: 1, h: -3405107048992371553, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.784-0500-5704c02865c17830b843f18b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128784), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -94.0 }, max: { _id: MaxKey } }, left: { min: { _id: -94.0 }, max: { _id: -93.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -93.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.809-0500 c20012| 2016-04-06T02:52:08.788-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|43 and ending at ts: Timestamp 1459929128000|43
[js_test:multi_coll_drop] 2016-04-06T02:53:01.811-0500 c20012| 2016-04-06T02:52:08.789-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.812-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.813-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.815-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.815-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.816-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.817-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.818-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.818-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.820-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.820-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.820-0500 c20012| 2016-04-06T02:52:08.789-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:01.821-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.824-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.825-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.827-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
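Each "Updating _lastCommittedOpTime" line above marks the majority commit point advancing: once this secondary reports an oplog entry durable, two of the three config servers (the primary plus this node) have it, and the primary's new commit point comes back piggybacked on the next getMore response as lastKnownCommittedOpTime. The commit point is also visible in replica set status, though the field layout varies by server version; a hedged check from a shell:

    // Where the commit point shows up in rs.status() (layout differs by version).
    var s = rs.status();
    printjson(s.optimes ? s.optimes.lastCommittedOpTime : s.lastCommittedOpTime);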
[js_test:multi_coll_drop] 2016-04-06T02:53:01.829-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.830-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.831-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.832-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.834-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.837-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.838-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.840-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.840-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.841-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.842-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.842-0500 c20012| 2016-04-06T02:52:08.789-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.844-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.846-0500 c20012| 2016-04-06T02:52:08.790-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.849-0500 c20012| 2016-04-06T02:52:08.791-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 624 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.791-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|42, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.851-0500 c20012| 2016-04-06T02:52:08.791-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 624 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.852-0500 c20012| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.853-0500 c20012| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.855-0500 c20012| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.856-0500 c20012| 2016-04-06T02:52:08.793-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:01.858-0500 c20012| 2016-04-06T02:52:08.793-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.870-0500 c20012| 2016-04-06T02:52:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 625 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.872-0500 c20012| 2016-04-06T02:52:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 625 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.874-0500 c20012| 2016-04-06T02:52:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 625 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.875-0500 c20012| 2016-04-06T02:52:08.794-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:01.878-0500 c20012| 2016-04-06T02:52:08.794-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.889-0500 c20012| 2016-04-06T02:52:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 627 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|42, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.893-0500 c20012| 2016-04-06T02:52:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 627 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.896-0500 c20012| 2016-04-06T02:52:08.795-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 627 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.901-0500 c20012| 2016-04-06T02:52:08.798-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.906-0500 c20012| 2016-04-06T02:52:08.798-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 629 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:01.906-0500 c20012| 2016-04-06T02:52:08.798-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 629 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:01.906-0500 c20012| 2016-04-06T02:52:08.798-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 629 finished with
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.906-0500 c20012| 2016-04-06T02:52:08.800-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 624 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.907-0500 c20012| 2016-04-06T02:52:08.800-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|43, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.908-0500 c20012| 2016-04-06T02:52:08.800-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:01.915-0500 c20012| 2016-04-06T02:52:08.800-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 632 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.800-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:01.917-0500 c20012| 2016-04-06T02:52:08.800-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 632 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:01.918-0500 c20012| 2016-04-06T02:52:08.800-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 632 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|44, t: 1, h: -7327796729150212279, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:01.919-0500 c20012| 2016-04-06T02:52:08.800-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|44 and ending at ts: Timestamp 1459929128000|44 [js_test:multi_coll_drop] 2016-04-06T02:53:01.921-0500 c20012| 2016-04-06T02:52:08.805-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:01.925-0500 c20012| 2016-04-06T02:52:08.805-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 634 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.805-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|43, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:01.925-0500 c20012| 2016-04-06T02:52:08.805-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 634 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:01.926-0500 c20012| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.926-0500 c20012| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.926-0500 c20012| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.928-0500 c20012| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.930-0500 c20012| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.932-0500 c20012| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.934-0500 c20012| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.936-0500 c20012| 2016-04-06T02:52:08.805-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.938-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.940-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.941-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.945-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.946-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.946-0500 c20012| 2016-04-06T02:52:08.806-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:01.960-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.963-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.994-0500 c20012| 2016-04-06T02:52:08.806-0500 D QUERY [repl writer worker 11] Using idhack: { _id: "multidrop.coll" } 
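
The getMore/nextBatch exchanges above are c20012's oplog tailing loop: the fetcher pulls one config.locks update per batch from its sync source mongovm16:20011, and each batch wakes the repl writer worker pool seen starting and shutting down around it. A minimal sketch of reading the same window of the oplog by hand from a mongo shell connected to a member (host and optime taken from the log; not part of the test itself):

    // Read oplog entries after the optime the fetcher reported; "Timestamp
    // 1459929128000|43" in log notation is Timestamp(1459929128, 43) in the shell.
    var oplog = db.getSiblingDB("local").oplog.rs;
    oplog.find({ ts: { $gt: Timestamp(1459929128, 43) }, ns: "config.locks" })
         .sort({ $natural: 1 })
         .forEach(printjson);
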
[js_test:multi_coll_drop] 2016-04-06T02:53:01.996-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.997-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:01.999-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.000-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.000-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.000-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.000-0500 c20012| 2016-04-06T02:52:08.806-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.000-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.000-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.000-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.001-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.001-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.001-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.023-0500 s20015| 2016-04-06T02:52:41.720-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:02.025-0500 s20015| 2016-04-06T02:52:41.720-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 64 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:02.025-0500 s20015| 2016-04-06T02:52:41.721-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20011, no events [js_test:multi_coll_drop] 2016-04-06T02:53:02.031-0500 s20015| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 63 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:02.034-0500 d20010| 2016-04-06T02:52:41.721-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] dropping unhealthy pooled connection to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:02.036-0500 d20010| 2016-04-06T02:52:41.721-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:02.039-0500 d20010| 
2016-04-06T02:52:41.721-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: WriteConcernFailed: InterruptedDueToReplStateChange: operation was interrupted. Error details: [js_test:multi_coll_drop] 2016-04-06T02:53:02.039-0500 d20010| 2016-04-06T02:52:41.747-0500 I SHARDING [conn5] distributed lock with ts: 5704c03a65c17830b843f1af' unlocked. [js_test:multi_coll_drop] 2016-04-06T02:53:02.041-0500 d20010| 2016-04-06T02:52:41.747-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -76.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -75.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|50, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 14864ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.042-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.043-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.043-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.043-0500 c20012| 2016-04-06T02:52:08.807-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.077-0500 c20012| 2016-04-06T02:52:08.808-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:02.095-0500 c20012| 2016-04-06T02:52:08.808-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:02.105-0500 c20012| 2016-04-06T02:52:08.808-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 635 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|43, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:02.105-0500 c20012| 2016-04-06T02:52:08.808-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 635 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.107-0500 c20012| 2016-04-06T02:52:08.809-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 635 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.124-0500 c20012| 2016-04-06T02:52:08.823-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:02.141-0500 c20012| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 637 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:02.143-0500 c20012| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 637 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.145-0500 c20012| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 637 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.151-0500 c20012| 2016-04-06T02:52:08.824-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 634 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.155-0500 c20012| 2016-04-06T02:52:08.824-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|44, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.155-0500 c20012| 2016-04-06T02:52:08.824-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:02.179-0500 c20012| 2016-04-06T02:52:08.825-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 640 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.825-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|44, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.182-0500 c20012| 2016-04-06T02:52:08.825-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 640 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.204-0500 c20012| 2016-04-06T02:52:08.835-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 640 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|45, t: 1, h: -2798690155182775057, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f18c'), state: 2, when: new Date(1459929128828), why: "splitting chunk [{ _id: -93.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.205-0500 c20012| 2016-04-06T02:52:08.835-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|45, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.210-0500 c20012| 2016-04-06T02:52:08.835-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|45 and ending at ts: Timestamp 1459929128000|45 [js_test:multi_coll_drop] 2016-04-06T02:53:02.212-0500 c20012| 2016-04-06T02:52:08.835-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:53:02.213-0500 c20012| 2016-04-06T02:52:08.835-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:02.216-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.218-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.222-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.245-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.250-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.251-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.253-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.254-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.254-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.256-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.257-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.258-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.259-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.261-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.263-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.264-0500 c20012| 2016-04-06T02:52:08.836-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:02.266-0500 c20012| 2016-04-06T02:52:08.836-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:02.268-0500 c20013| 2016-04-06T02:52:08.906-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 722 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.906-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.269-0500 c20013| 2016-04-06T02:52:08.906-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 722 on host mongovm16:20011 
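
The replicated updates above all target the config.locks document with _id "multidrop.coll", flipping its state field between 2 (lock held, with a why string naming the split in progress) and 0 (lock free). A sketch of inspecting that document directly, assuming a shell connected to the config server primary:

    // The distributed-lock document the oplog entries keep rewriting; state 2
    // means held, state 0 means free, and 'why' records the current split.
    printjson(db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" }));
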
[js_test:multi_coll_drop] 2016-04-06T02:53:02.270-0500 d20010| 2016-04-06T02:52:41.772-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -75.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -74.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|52, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.271-0500 d20010| 2016-04-06T02:52:41.782-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -75.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04965c17830b843f1b1 [js_test:multi_coll_drop] 2016-04-06T02:53:02.278-0500 d20010| 2016-04-06T02:52:41.782-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|52||5704c02806c33406d4d9c0c0, current metadata version is 1|52||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.280-0500 d20010| 2016-04-06T02:52:41.785-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|52||5704c02806c33406d4d9c0c0, took 3ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.281-0500 d20010| 2016-04-06T02:52:41.786-0500 I SHARDING [conn5] splitChunk accepted at version 1|52||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.287-0500 d20010| 2016-04-06T02:52:41.797-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:41.797-0500-5704c04965c17830b843f1b2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161797), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -75.0 }, max: { _id: MaxKey } }, left: { min: { _id: -75.0 }, max: { _id: -74.0 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -74.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.289-0500 d20010| 2016-04-06T02:52:41.839-0500 I SHARDING [conn5] distributed lock with ts: 5704c04965c17830b843f1b1' unlocked. 
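
The d20010 block above is one complete shard-side split: receive splitChunk, take the 'multidrop.coll' distributed lock, refresh metadata, accept at the current shard version, write a changelog event, and unlock. The test drives these one key at a time through mongos; a hedged shell equivalent of a single step (the helper goes to mongos, which in turn issues the splitChunk command logged here):

    // Split the [{ _id: -75 }, { _id: MaxKey }) chunk at -74, matching the
    // request in the log; run against a mongos such as mongovm16:20015.
    sh.splitAt("multidrop.coll", { _id: -74.0 });
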
[js_test:multi_coll_drop] 2016-04-06T02:53:02.290-0500 c20012| 2016-04-06T02:52:08.836-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.291-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.292-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.294-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.296-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.296-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.297-0500 c20011| 2016-04-06T02:52:16.555-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:02.300-0500 c20011| 2016-04-06T02:52:16.557-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.301-0500 c20011| 2016-04-06T02:52:16.558-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.303-0500 c20011| 2016-04-06T02:52:16.558-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.308-0500 c20011| 2016-04-06T02:52:16.658-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.312-0500 s20015| 2016-04-06T02:52:41.721-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20013, no events [js_test:multi_coll_drop] 2016-04-06T02:53:02.319-0500 c20011| 2016-04-06T02:52:16.658-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.321-0500 c20011| 2016-04-06T02:52:16.859-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.321-0500 s20015| 2016-04-06T02:52:41.721-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:02.322-0500 c20011| 2016-04-06T02:52:16.859-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.323-0500 c20011| 2016-04-06T02:52:17.059-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.324-0500 c20011| 2016-04-06T02:52:17.059-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.328-0500 c20011| 2016-04-06T02:52:17.060-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } 
[js_test:multi_coll_drop] 2016-04-06T02:53:02.332-0500 c20011| 2016-04-06T02:52:17.060-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.332-0500 c20011| 2016-04-06T02:52:17.199-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59629 #36 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:02.336-0500 s20015| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 66 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:02.339-0500 d20010| 2016-04-06T02:52:41.842-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -74.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -73.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|54, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.342-0500 d20010| 2016-04-06T02:52:41.879-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -74.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04965c17830b843f1b3 [js_test:multi_coll_drop] 2016-04-06T02:53:02.343-0500 d20010| 2016-04-06T02:52:41.879-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|54||5704c02806c33406d4d9c0c0, current metadata version is 1|54||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.346-0500 d20010| 2016-04-06T02:52:41.880-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|54||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.346-0500 d20010| 2016-04-06T02:52:41.880-0500 I SHARDING [conn5] splitChunk accepted at version 1|54||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.353-0500 d20010| 2016-04-06T02:52:41.893-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:41.893-0500-5704c04965c17830b843f1b4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161893), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -74.0 }, max: { _id: MaxKey } }, left: { min: { _id: -74.0 }, max: { _id: -73.0 }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -73.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.356-0500 d20010| 2016-04-06T02:52:41.952-0500 I SHARDING [conn5] distributed lock with ts: 5704c04965c17830b843f1b3' unlocked. 
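
The conn32/conn33 entries on c20011 above are ReplicaSetMonitor liveness polls: a bare isMaster roughly every 200ms, answered in 0ms without taking any locks. The probe itself is just:

    // What the monitor sends each member; the reply carries primary/secondary
    // state and the replica-set topology.
    db.getSiblingDB("admin").runCommand({ ismaster: 1 });
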
[js_test:multi_coll_drop] 2016-04-06T02:53:02.357-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.360-0500 d20010| 2016-04-06T02:52:41.952-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -74.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -73.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|54, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 109ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.376-0500 d20010| 2016-04-06T02:52:41.955-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -73.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -72.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|56, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.387-0500 d20010| 2016-04-06T02:52:41.997-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -73.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04965c17830b843f1b5 [js_test:multi_coll_drop] 2016-04-06T02:53:02.391-0500 d20010| 2016-04-06T02:52:41.997-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|56||5704c02806c33406d4d9c0c0, current metadata version is 1|56||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.394-0500 d20010| 2016-04-06T02:52:42.017-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|56||5704c02806c33406d4d9c0c0, took 19ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.396-0500 d20010| 2016-04-06T02:52:42.017-0500 I SHARDING [conn5] splitChunk accepted at version 1|56||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.402-0500 d20010| 2016-04-06T02:52:42.035-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:42.035-0500-5704c04a65c17830b843f1b6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162035), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -73.0 }, max: { _id: MaxKey } }, left: { min: { _id: -73.0 }, max: { _id: -72.0 }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -72.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.403-0500 d20010| 2016-04-06T02:52:42.078-0500 I SHARDING [conn5] distributed lock with ts: 5704c04965c17830b843f1b5' unlocked. 
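
Each "about to log metadata event into changelog" entry above becomes a document in config.changelog, recording the before range and the resulting left/right chunks with their new lastmod versions. A sketch for reading the split history back, assuming a shell on the config server:

    // Most recent split events for the test collection.
    db.getSiblingDB("config").changelog
      .find({ what: "split", ns: "multidrop.coll" })
      .sort({ time: -1 })
      .limit(3)
      .forEach(printjson);
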
[js_test:multi_coll_drop] 2016-04-06T02:53:02.407-0500 d20010| 2016-04-06T02:52:42.078-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -73.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -72.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|56, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 122ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.416-0500 d20010| 2016-04-06T02:52:42.087-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -72.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -71.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|58, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.417-0500 d20010| 2016-04-06T02:52:42.098-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -72.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04a65c17830b843f1b7 [js_test:multi_coll_drop] 2016-04-06T02:53:02.421-0500 d20010| 2016-04-06T02:52:42.098-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|58||5704c02806c33406d4d9c0c0, current metadata version is 1|58||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.423-0500 d20010| 2016-04-06T02:52:42.100-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|58||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.423-0500 d20010| 2016-04-06T02:52:42.100-0500 I SHARDING [conn5] splitChunk accepted at version 1|58||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.431-0500 d20010| 2016-04-06T02:52:42.110-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:42.110-0500-5704c04a65c17830b843f1b8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162110), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -72.0 }, max: { _id: MaxKey } }, left: { min: { _id: -72.0 }, max: { _id: -71.0 }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -71.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.436-0500 d20010| 2016-04-06T02:52:42.149-0500 I SHARDING [conn5] distributed lock with ts: 5704c04a65c17830b843f1b7' unlocked. 
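
Every split above brackets its metadata writes with the distributed lock, and lock holders prove liveness through the replSetDistLockPinger thread (whose WriteConcernFailed ping earlier in this section is the continuous-stepdown suite at work). The pings land in config.lockpings; a sketch, with field names as I believe they appear in this release:

    // One document per lock-holding process; 'ping' is the last heartbeat time.
    db.getSiblingDB("config").lockpings
      .find().sort({ ping: -1 }).limit(3).forEach(printjson);
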
[js_test:multi_coll_drop] 2016-04-06T02:53:02.440-0500 d20010| 2016-04-06T02:52:42.154-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -71.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -70.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|60, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.443-0500 d20010| 2016-04-06T02:52:42.176-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -71.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04a65c17830b843f1b9 [js_test:multi_coll_drop] 2016-04-06T02:53:02.457-0500 c20013| 2016-04-06T02:52:08.906-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 722 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|56, t: 1, h: -2006001534307679450, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.465-0500 c20013| 2016-04-06T02:52:08.906-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|56 and ending at ts: Timestamp 1459929128000|56 [js_test:multi_coll_drop] 2016-04-06T02:53:02.467-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.705-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.706-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:02.708-0500 c20011| 2016-04-06T02:52:17.200-0500 D COMMAND [conn36] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:02.712-0500 s20015| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 62 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:02.631-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.715-0500 d20010| 2016-04-06T02:52:42.176-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|60||5704c02806c33406d4d9c0c0, current metadata version is 1|60||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.724-0500 c20011| 2016-04-06T02:52:17.200-0500 I COMMAND [conn36] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.741-0500 c20013| 2016-04-06T02:52:08.906-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: 
-1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:02.748-0500 c20013| 2016-04-06T02:52:08.906-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 724 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:02.750-0500 c20013| 2016-04-06T02:52:08.906-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 724 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.752-0500 c20013| 2016-04-06T02:52:08.907-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:02.752-0500 s20015| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 62 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.753-0500 s20015| 2016-04-06T02:52:41.721-0500 D NETWORK [Balancer] Marking host mongovm16:20012 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:02.753-0500 s20015| 2016-04-06T02:52:41.721-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.758-0500 s20015| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 65 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:02.762-0500 s20015| 2016-04-06T02:52:41.721-0500 D ASIO [Balancer] startCommand: RemoteCommand 68 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.721-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.766-0500 s20015| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 63 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:07.364-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929157363) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.768-0500 s20015| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 63 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.769-0500 s20015| 
2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 68 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.770-0500 s20015| 2016-04-06T02:52:41.722-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 65 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:07.373-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.771-0500 s20015| 2016-04-06T02:52:41.722-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 65 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.773-0500 s20015| 2016-04-06T02:52:41.722-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] failed to close stream: Transport endpoint is not connected [js_test:multi_coll_drop] 2016-04-06T02:53:02.775-0500 s20015| 2016-04-06T02:52:41.722-0500 D NETWORK [UserCacheInvalidator] Marking host mongovm16:20012 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:02.776-0500 s20015| 2016-04-06T02:52:41.722-0500 D SHARDING [UserCacheInvalidator] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.777-0500 s20015| 2016-04-06T02:52:41.722-0500 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 71 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:11.722-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.778-0500 s20015| 2016-04-06T02:52:41.722-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.778-0500 s20015| 2016-04-06T02:52:41.722-0500 D NETWORK [replSetDistLockPinger] Marking host mongovm16:20012 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:02.780-0500 s20015| 2016-04-06T02:52:41.722-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:02.783-0500 s20015| 2016-04-06T02:52:41.725-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 72 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.784-0500 s20015| 2016-04-06T02:52:41.726-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.785-0500 s20015| 2016-04-06T02:52:41.726-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 72 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:02.789-0500 s20015| 2016-04-06T02:52:41.726-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 71 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.790-0500 s20015| 2016-04-06T02:52:41.726-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 71 finished with response: { cacheGeneration: ObjectId('5704c01c3876c4cfd2eb3eb7'), ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.792-0500 s20015| 2016-04-06T02:52:41.737-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 68 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929161000|1, t: 3 }, electionId: ObjectId('7fffffff0000000000000003') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.795-0500 s20015| 2016-04-06T02:52:41.737-0500 D ASIO [Balancer] startCommand: RemoteCommand 75 -- target:mongovm16:20011 
db:config expDate:2016-04-06T02:53:11.737-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.798-0500 s20015| 2016-04-06T02:52:41.737-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 75 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.800-0500 s20015| 2016-04-06T02:52:41.737-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 75 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.801-0500 s20015| 2016-04-06T02:52:41.737-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929161000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.804-0500 s20015| 2016-04-06T02:52:41.738-0500 D ASIO [Balancer] startCommand: RemoteCommand 77 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.738-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.806-0500 s20015| 2016-04-06T02:52:41.738-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 77 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.807-0500 s20015| 2016-04-06T02:52:41.738-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 77 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.808-0500 s20015| 2016-04-06T02:52:41.738-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:53:02.813-0500 s20015| 2016-04-06T02:52:41.738-0500 D ASIO [Balancer] startCommand: RemoteCommand 79 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:11.738-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.816-0500 s20015| 2016-04-06T02:52:41.738-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 79 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:02.820-0500 s20015| 2016-04-06T02:52:41.747-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 79 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.820-0500 s20015| 2016-04-06T02:52:41.747-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:53:02.827-0500 s20015| 2016-04-06T02:52:41.747-0500 D ASIO [Balancer] startCommand: RemoteCommand 81 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:11.747-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929161747), up: 34, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.828-0500 s20015| 
2016-04-06T02:52:41.747-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 81 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:02.829-0500 d20010| 2016-04-06T02:52:42.180-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|60||5704c02806c33406d4d9c0c0, took 4ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.829-0500 d20010| 2016-04-06T02:52:42.180-0500 I SHARDING [conn5] splitChunk accepted at version 1|60||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.834-0500 d20010| 2016-04-06T02:52:42.191-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:42.191-0500-5704c04a65c17830b843f1ba", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162191), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -71.0 }, max: { _id: MaxKey } }, left: { min: { _id: -71.0 }, max: { _id: -70.0 }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -70.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.836-0500 d20010| 2016-04-06T02:52:42.287-0500 I SHARDING [conn5] distributed lock with ts: 5704c04a65c17830b843f1b9' unlocked. [js_test:multi_coll_drop] 2016-04-06T02:53:02.840-0500 d20010| 2016-04-06T02:52:42.287-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -71.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -70.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|60, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 132ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.843-0500 d20010| 2016-04-06T02:52:42.313-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -70.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -69.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|62, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.845-0500 d20010| 2016-04-06T02:52:42.326-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -70.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04a65c17830b843f1bb [js_test:multi_coll_drop] 2016-04-06T02:53:02.850-0500 d20010| 2016-04-06T02:52:42.326-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|62||5704c02806c33406d4d9c0c0, current metadata version is 1|62||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.853-0500 d20010| 2016-04-06T02:52:42.329-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|62||5704c02806c33406d4d9c0c0, took 3ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.854-0500 d20010| 2016-04-06T02:52:42.329-0500 I SHARDING [conn5] splitChunk accepted at version 1|62||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 
2016-04-06T02:53:02.859-0500 d20010| 2016-04-06T02:52:42.348-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:42.348-0500-5704c04a65c17830b843f1bc", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162348), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -70.0 }, max: { _id: MaxKey } }, left: { min: { _id: -70.0 }, max: { _id: -69.0 }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -69.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.862-0500 d20010| 2016-04-06T02:52:42.410-0500 I SHARDING [conn5] distributed lock with ts: 5704c04a65c17830b843f1bb' unlocked. [js_test:multi_coll_drop] 2016-04-06T02:53:02.865-0500 d20010| 2016-04-06T02:52:42.436-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -69.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -68.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|64, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.868-0500 d20010| 2016-04-06T02:52:42.487-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -69.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04a65c17830b843f1bd [js_test:multi_coll_drop] 2016-04-06T02:53:02.870-0500 d20010| 2016-04-06T02:52:42.487-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|64||5704c02806c33406d4d9c0c0, current metadata version is 1|64||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.872-0500 d20010| 2016-04-06T02:52:42.493-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|64||5704c02806c33406d4d9c0c0, took 5ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.872-0500 d20010| 2016-04-06T02:52:42.493-0500 I SHARDING [conn5] splitChunk accepted at version 1|64||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.876-0500 d20010| 2016-04-06T02:52:42.526-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:42.526-0500-5704c04a65c17830b843f1be", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162526), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -69.0 }, max: { _id: MaxKey } }, left: { min: { _id: -69.0 }, max: { _id: -68.0 }, lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -68.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.879-0500 d20010| 2016-04-06T02:52:42.701-0500 I SHARDING [conn5] distributed lock with ts: 5704c04a65c17830b843f1bd' unlocked. 
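The balancer round near the top of this stretch is a no-op by design: the mongos reads { _id: "chunksize", value: 50 } and { _id: "balancer", stopped: true } out of config.settings, refreshes MaxChunkSize to 50MB, and then logs "skipping balancing round because balancing is disabled". A minimal mongo-shell sketch of writing those settings directly (one way a harness can produce them; the log does not show which helper this test actually used):

    var conf = db.getSiblingDB("config");
    // Chunk size in MB; the Balancer logs "Refreshing MaxChunkSize: 50MB".
    conf.settings.update({ _id: "chunksize" }, { $set: { value: 50 } }, { upsert: true });
    // stopped: true is why every round is skipped in this log.
    conf.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, { upsert: true });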
[js_test:multi_coll_drop] 2016-04-06T02:53:02.884-0500 d20010| 2016-04-06T02:52:42.701-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -69.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -68.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|64, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 265ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.887-0500 d20010| 2016-04-06T02:52:42.713-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -68.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -67.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|66, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.889-0500 d20010| 2016-04-06T02:52:42.739-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -68.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04a65c17830b843f1bf [js_test:multi_coll_drop] 2016-04-06T02:53:02.890-0500 d20010| 2016-04-06T02:52:42.739-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|66||5704c02806c33406d4d9c0c0, current metadata version is 1|66||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.892-0500 d20010| 2016-04-06T02:52:42.753-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|66||5704c02806c33406d4d9c0c0, took 14ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.894-0500 d20010| 2016-04-06T02:52:42.754-0500 I SHARDING [conn5] splitChunk accepted at version 1|66||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.896-0500 d20010| 2016-04-06T02:52:42.780-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:42.780-0500-5704c04a65c17830b843f1c0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162780), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -68.0 }, max: { _id: MaxKey } }, left: { min: { _id: -68.0 }, max: { _id: -67.0 }, lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -67.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.897-0500 d20010| 2016-04-06T02:52:42.823-0500 I SHARDING [conn5] distributed lock with ts: 5704c04a65c17830b843f1bf' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:53:02.902-0500 d20010| 2016-04-06T02:52:42.823-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -68.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -67.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|66, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 110ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.906-0500 d20010| 2016-04-06T02:52:42.840-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -67.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -66.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|68, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.909-0500 d20010| 2016-04-06T02:52:42.865-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -67.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04a65c17830b843f1c1 [js_test:multi_coll_drop] 2016-04-06T02:53:02.912-0500 d20010| 2016-04-06T02:52:42.865-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|68||5704c02806c33406d4d9c0c0, current metadata version is 1|68||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.913-0500 d20010| 2016-04-06T02:52:42.870-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|68||5704c02806c33406d4d9c0c0, took 5ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.913-0500 d20010| 2016-04-06T02:52:42.870-0500 I SHARDING [conn5] splitChunk accepted at version 1|68||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.918-0500 d20010| 2016-04-06T02:52:42.894-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:42.894-0500-5704c04a65c17830b843f1c2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162894), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -67.0 }, max: { _id: MaxKey } }, left: { min: { _id: -67.0 }, max: { _id: -66.0 }, lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -66.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.921-0500 d20010| 2016-04-06T02:52:42.937-0500 I SHARDING [conn5] distributed lock with ts: 5704c04a65c17830b843f1c1' unlocked. 
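The repeating received splitChunk / lock acquired / metadata refresh / split accepted / changelog / unlocked cycle above is driven one key at a time, and each accepted split bumps the shard version's minor component by two (one for the left chunk, one for the right: 1|61 and 1|62, then 1|63 and 1|64, and so on). A hedged sketch of the driving loop as it would look from the shell through the mongos (the shard-side splitChunk commands in the d20010 entries are what mongos forwards):

    var admin = db.getSiblingDB("admin");
    // Split the chunk covering each key in turn.
    for (var i = -68; i <= -64; i++) {
        assert.commandWorked(admin.runCommand({ split: "multidrop.coll", middle: { _id: i } }));
    }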
[js_test:multi_coll_drop] 2016-04-06T02:53:02.926-0500 d20010| 2016-04-06T02:52:42.952-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -66.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -65.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|70, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.928-0500 d20010| 2016-04-06T02:52:43.053-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -66.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04a65c17830b843f1c3 [js_test:multi_coll_drop] 2016-04-06T02:53:02.931-0500 d20010| 2016-04-06T02:52:43.053-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|70||5704c02806c33406d4d9c0c0, current metadata version is 1|70||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.934-0500 d20010| 2016-04-06T02:52:43.082-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|70||5704c02806c33406d4d9c0c0, took 29ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.936-0500 d20010| 2016-04-06T02:52:43.082-0500 I SHARDING [conn5] splitChunk accepted at version 1|70||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.943-0500 d20010| 2016-04-06T02:52:43.119-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:43.119-0500-5704c04b65c17830b843f1c4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163119), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -66.0 }, max: { _id: MaxKey } }, left: { min: { _id: -66.0 }, max: { _id: -65.0 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -65.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.944-0500 d20010| 2016-04-06T02:52:43.194-0500 I SHARDING [conn5] distributed lock with ts: 5704c04a65c17830b843f1c3' unlocked. 
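Each split is bracketed by the config-server distributed lock whose ts values appear in the acquired/unlocked pairs above. The lock document lives in config.locks and can be inspected directly; a sketch, with the state values inferred from the oplog entries later in this log (state: 2 while held for a split, state: 0 once released):

    var conf = db.getSiblingDB("config");
    // state: 2 => currently held; state: 0 => free.
    conf.locks.find({ _id: "multidrop.coll" }).pretty();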
[js_test:multi_coll_drop] 2016-04-06T02:53:02.948-0500 d20010| 2016-04-06T02:52:43.194-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -66.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -65.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|70, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 241ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.950-0500 d20010| 2016-04-06T02:52:43.203-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -65.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -64.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|72, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.953-0500 d20010| 2016-04-06T02:52:43.231-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -65.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04b65c17830b843f1c5 [js_test:multi_coll_drop] 2016-04-06T02:53:02.955-0500 d20010| 2016-04-06T02:52:43.231-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|72||5704c02806c33406d4d9c0c0, current metadata version is 1|72||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.958-0500 d20010| 2016-04-06T02:52:43.232-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|72||5704c02806c33406d4d9c0c0, took 1ms) [js_test:multi_coll_drop] 2016-04-06T02:53:02.959-0500 d20010| 2016-04-06T02:52:43.232-0500 I SHARDING [conn5] splitChunk accepted at version 1|72||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.962-0500 d20010| 2016-04-06T02:52:43.260-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:52:43.260-0500-5704c04b65c17830b843f1c6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163260), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -65.0 }, max: { _id: MaxKey } }, left: { min: { _id: -65.0 }, max: { _id: -64.0 }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -64.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:02.962-0500 d20010| 2016-04-06T02:52:43.324-0500 I SHARDING [conn5] distributed lock with ts: 5704c04b65c17830b843f1c5' unlocked. 
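The "about to log metadata event into changelog" entries each persist a split document into config.changelog, recording the before range plus the resulting left and right chunks with their lastmod versions. Replaying the split history for this collection afterwards is a one-liner from the shell:

    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: 1 });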
[js_test:multi_coll_drop] 2016-04-06T02:53:02.967-0500 d20010| 2016-04-06T02:52:43.324-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -65.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -64.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|72, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 120ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.969-0500 d20010| 2016-04-06T02:52:43.335-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -64.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -63.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|74, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.971-0500 d20010| 2016-04-06T02:52:43.367-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -64.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c04b65c17830b843f1c7 [js_test:multi_coll_drop] 2016-04-06T02:53:02.973-0500 d20010| 2016-04-06T02:52:43.367-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|74||5704c02806c33406d4d9c0c0, current metadata version is 1|74||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:02.975-0500 s20015| 2016-04-06T02:52:41.773-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 81 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929161000|6, t: 3 }, electionId: ObjectId('7fffffff0000000000000003') } [js_test:multi_coll_drop] 2016-04-06T02:53:02.978-0500 c20011| 2016-04-06T02:52:17.200-0500 D COMMAND [conn36] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929137199), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:02.980-0500 c20011| 2016-04-06T02:52:17.200-0500 D - [conn36] User Assertion: 10107:not master [js_test:multi_coll_drop] 2016-04-06T02:53:02.983-0500 c20011| 2016-04-06T02:52:17.200-0500 D COMMAND [conn36] assertion while executing command 'update' on database 'config' with arguments '{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929137199), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 10107 not master [js_test:multi_coll_drop] 2016-04-06T02:53:02.988-0500 c20011| 2016-04-06T02:52:17.200-0500 I COMMAND [conn36] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929137199), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, 
maxTimeMS: 30000 } exception: not master code:10107 numYields:0 reslen:55 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.990-0500 c20011| 2016-04-06T02:52:17.200-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59630 #37 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:02.991-0500 c20011| 2016-04-06T02:52:17.201-0500 D COMMAND [conn37] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:02.996-0500 c20011| 2016-04-06T02:52:17.201-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:02.998-0500 c20011| 2016-04-06T02:52:17.201-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.001-0500 c20011| 2016-04-06T02:52:17.201-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.002-0500 c20011| 2016-04-06T02:52:17.201-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.025-0500 c20011| 2016-04-06T02:52:17.201-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.027-0500 c20011| 2016-04-06T02:52:17.261-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.029-0500 c20011| 2016-04-06T02:52:17.261-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.030-0500 c20011| 2016-04-06T02:52:17.436-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59636 #38 (11 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:03.031-0500 c20011| 2016-04-06T02:52:17.436-0500 D COMMAND [conn38] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.032-0500 c20011| 2016-04-06T02:52:17.436-0500 I COMMAND [conn38] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.035-0500 c20011| 2016-04-06T02:52:17.436-0500 D COMMAND [conn38] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929137435), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.037-0500 c20011| 2016-04-06T02:52:17.436-0500 D - [conn38] User Assertion: 10107:not master [js_test:multi_coll_drop] 2016-04-06T02:53:03.040-0500 c20011| 2016-04-06T02:52:17.436-0500 D COMMAND [conn38] assertion while executing command 'update' on database 'config' with arguments '{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929137435), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 
10107 not master [js_test:multi_coll_drop] 2016-04-06T02:53:03.042-0500 c20011| 2016-04-06T02:52:17.436-0500 I COMMAND [conn38] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929137435), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } exception: not master code:10107 numYields:0 reslen:55 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.045-0500 c20011| 2016-04-06T02:52:17.437-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59637 #39 (12 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:03.045-0500 c20011| 2016-04-06T02:52:17.437-0500 D COMMAND [conn39] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.047-0500 c20011| 2016-04-06T02:52:17.437-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.048-0500 c20011| 2016-04-06T02:52:17.437-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.049-0500 c20011| 2016-04-06T02:52:17.437-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.050-0500 c20011| 2016-04-06T02:52:17.437-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.052-0500 c20011| 2016-04-06T02:52:17.437-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.052-0500 c20011| 2016-04-06T02:52:17.462-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.052-0500 c20011| 2016-04-06T02:52:17.462-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.053-0500 c20011| 2016-04-06T02:52:17.560-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.053-0500 c20011| 2016-04-06T02:52:17.560-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.053-0500 c20011| 2016-04-06T02:52:17.663-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.059-0500 c20011| 2016-04-06T02:52:17.663-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.060-0500 c20011| 2016-04-06T02:52:17.702-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.064-0500 c20011| 2016-04-06T02:52:17.702-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.065-0500 c20011| 2016-04-06T02:52:17.864-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } 
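The two failed config.mongos ping upserts above (exception: not master code:10107) are benign: c20011 had stepped down, so each mongos ping writer gets NotMaster back, re-runs isMaster against the members (the burst of isMaster traffic on new connections #37, #38, #39 that follows), and resends the write to whichever node claims primary. A hedged sketch of that retry shape against a direct connection to a config-server member (the helper name here is hypothetical, not from the test):

    // Hypothetical retry helper; 10107 means "this node is no longer primary".
    function upsertMongosPing(conn, id) {
        for (var attempt = 0; attempt < 5; attempt++) {
            var res = conn.getDB("config").runCommand({
                update: "mongos",
                updates: [{ q: { _id: id },
                            u: { $set: { _id: id, ping: new Date() } },
                            multi: false, upsert: true }],
                writeConcern: { w: "majority", wtimeout: 15000 }
            });
            if (res.ok) return res;
            assert.eq(10107, res.code);  // not master: rediscover primary, retry
            sleep(100);
        }
        throw Error("config.mongos ping upsert never reached a primary");
    }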
[js_test:multi_coll_drop] 2016-04-06T02:53:03.065-0500 c20011| 2016-04-06T02:52:17.864-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.065-0500 c20011| 2016-04-06T02:52:17.938-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.067-0500 c20011| 2016-04-06T02:52:17.938-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.067-0500 c20011| 2016-04-06T02:52:18.061-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.069-0500 c20011| 2016-04-06T02:52:18.061-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.071-0500 c20011| 2016-04-06T02:52:18.065-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.073-0500 c20011| 2016-04-06T02:52:18.065-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.073-0500 c20011| 2016-04-06T02:52:18.203-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.075-0500 c20011| 2016-04-06T02:52:18.204-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.076-0500 c20011| 2016-04-06T02:52:18.266-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.080-0500 c20011| 2016-04-06T02:52:18.266-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.085-0500 c20011| 2016-04-06T02:52:18.358-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.087-0500 c20011| 2016-04-06T02:52:18.359-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.088-0500 c20011| 2016-04-06T02:52:18.439-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.093-0500 c20011| 2016-04-06T02:52:18.439-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.094-0500 c20011| 2016-04-06T02:52:18.466-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.101-0500 c20011| 2016-04-06T02:52:18.467-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.104-0500 c20011| 2016-04-06T02:52:18.547-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 53 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:28.547-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 
2016-04-06T02:53:03.104-0500 c20011| 2016-04-06T02:52:18.547-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 53 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:03.105-0500 c20011| 2016-04-06T02:52:18.547-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 53 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.106-0500 c20011| 2016-04-06T02:52:18.547-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:21.047Z [js_test:multi_coll_drop] 2016-04-06T02:53:03.106-0500 c20011| 2016-04-06T02:52:18.562-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.108-0500 c20011| 2016-04-06T02:52:18.562-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.108-0500 c20011| 2016-04-06T02:52:18.667-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.109-0500 c20011| 2016-04-06T02:52:18.668-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.110-0500 c20011| 2016-04-06T02:52:18.705-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.111-0500 c20011| 2016-04-06T02:52:18.705-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.112-0500 c20011| 2016-04-06T02:52:18.868-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.113-0500 c20011| 2016-04-06T02:52:18.869-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.113-0500 c20011| 2016-04-06T02:52:18.940-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.114-0500 c20011| 2016-04-06T02:52:18.940-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.116-0500 c20011| 2016-04-06T02:52:19.051-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 55 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:29.051-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.117-0500 c20011| 2016-04-06T02:52:19.051-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 55 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.118-0500 c20011| 2016-04-06T02:52:19.051-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 55 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.120-0500 c20011| 2016-04-06T02:52:19.052-0500 D REPL 
[ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:21.552Z [js_test:multi_coll_drop] 2016-04-06T02:53:03.121-0500 c20011| 2016-04-06T02:52:19.053-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.122-0500 c20011| 2016-04-06T02:52:19.053-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:03.126-0500 c20011| 2016-04-06T02:52:19.053-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.129-0500 c20011| 2016-04-06T02:52:19.063-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.131-0500 c20011| 2016-04-06T02:52:19.063-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.133-0500 c20011| 2016-04-06T02:52:19.066-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.134-0500 c20011| 2016-04-06T02:52:19.066-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:03.138-0500 c20011| 2016-04-06T02:52:19.066-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.144-0500 c20011| 2016-04-06T02:52:19.070-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.155-0500 c20011| 2016-04-06T02:52:19.070-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.161-0500 c20011| 2016-04-06T02:52:19.101-0500 I REPL [ReplicationExecutor] Not starting an election, since we are not electable due to: Not standing for election because I am still waiting for stepdown period to end at 2016-04-06T02:52:24.045-0500 (mask 0x20) [js_test:multi_coll_drop] 2016-04-06T02:53:03.167-0500 c20011| 2016-04-06T02:52:19.140-0500 D COMMAND [conn29] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.169-0500 c20011| 2016-04-06T02:52:19.140-0500 D COMMAND [conn29] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:03.170-0500 c20011| 2016-04-06T02:52:19.140-0500 D QUERY [conn29] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.174-0500 c20011| 2016-04-06T02:52:19.141-0500 I COMMAND [conn29] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.176-0500 c20011| 2016-04-06T02:52:19.141-0500 D COMMAND [conn29] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.176-0500 c20011| 2016-04-06T02:52:19.141-0500 D COMMAND [conn29] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:03.178-0500 c20011| 2016-04-06T02:52:19.142-0500 D QUERY [conn29] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.184-0500 c20011| 2016-04-06T02:52:19.142-0500 I COMMAND [conn29] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.186-0500 c20011| 2016-04-06T02:52:19.142-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.187-0500 c20011| 2016-04-06T02:52:19.142-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:03.190-0500 c20011| 2016-04-06T02:52:19.142-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } numYields:0 reslen:459 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.190-0500 c20011| 2016-04-06T02:52:19.208-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.193-0500 c20011| 2016-04-06T02:52:19.208-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.195-0500 c20011| 2016-04-06T02:52:19.276-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.197-0500 c20011| 2016-04-06T02:52:19.276-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.199-0500 c20011| 2016-04-06T02:52:19.441-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.201-0500 c20011| 2016-04-06T02:52:19.441-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
2016-04-06T02:53:03.202-0500 c20011| 2016-04-06T02:52:19.477-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.204-0500 c20011| 2016-04-06T02:52:19.477-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.207-0500 c20011| 2016-04-06T02:52:19.576-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.209-0500 c20011| 2016-04-06T02:52:19.576-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.209-0500 c20011| 2016-04-06T02:52:19.678-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.210-0500 c20011| 2016-04-06T02:52:19.678-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.211-0500 c20011| 2016-04-06T02:52:19.709-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.212-0500 c20011| 2016-04-06T02:52:19.709-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.213-0500 c20011| 2016-04-06T02:52:19.942-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.213-0500 c20011| 2016-04-06T02:52:19.942-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.217-0500 c20011| 2016-04-06T02:52:21.047-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 57 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:31.047-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.218-0500 c20011| 2016-04-06T02:52:21.048-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 57 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:03.223-0500 c20011| 2016-04-06T02:52:21.048-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 57 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.226-0500 c20011| 2016-04-06T02:52:21.049-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:23.549Z [js_test:multi_coll_drop] 2016-04-06T02:53:03.230-0500 c20011| 2016-04-06T02:52:21.143-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.231-0500 c20011| 2016-04-06T02:52:21.143-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:03.232-0500 c20011| 2016-04-06T02:52:21.144-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } numYields:0 
reslen:459 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.237-0500 c20011| 2016-04-06T02:52:21.552-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 59 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:31.552-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.238-0500 c20011| 2016-04-06T02:52:21.552-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 59 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.241-0500 c20011| 2016-04-06T02:52:21.553-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 59 finished with response: { ok: 1.0, electionTime: new Date(6270347906482438145), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, opTime: { ts: Timestamp 1459929139000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.245-0500 c20011| 2016-04-06T02:52:21.554-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.246-0500 c20011| 2016-04-06T02:52:21.554-0500 I REPL [ReplicationExecutor] Member mongovm16:20012 is now in state PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:03.246-0500 c20011| 2016-04-06T02:52:21.554-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:03.246-0500 c20011| 2016-04-06T02:52:21.554-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:24.054Z [js_test:multi_coll_drop] 2016-04-06T02:53:03.249-0500 c20011| 2016-04-06T02:52:21.554-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.250-0500 c20011| 2016-04-06T02:52:21.615-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.256-0500 c20011| 2016-04-06T02:52:21.615-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 61 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:51.615-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.258-0500 c20011| 2016-04-06T02:52:21.615-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 61 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.260-0500 c20011| 2016-04-06T02:52:21.616-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 61 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.262-0500 c20011| 2016-04-06T02:52:21.616-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20012 starting at filter: { ts: { $gte: Timestamp 1459929130000|10 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.265-0500 c20011| 2016-04-06T02:52:21.616-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.269-0500 c20011| 2016-04-06T02:52:21.616-0500 D ASIO [rsBackgroundSync] 
startCommand: RemoteCommand 63 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:26.616-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929130000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.273-0500 c20011| 2016-04-06T02:52:21.617-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.281-0500 c20011| 2016-04-06T02:52:21.617-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 65 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.282-0500 c20011| 2016-04-06T02:52:21.617-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.284-0500 c20011| 2016-04-06T02:52:21.617-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 66 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.286-0500 c20011| 2016-04-06T02:52:21.618-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.287-0500 c20011| 2016-04-06T02:52:21.618-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 66 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:03.288-0500 c20011| 2016-04-06T02:52:21.618-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 65 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.292-0500 c20011| 2016-04-06T02:52:21.618-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 65 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.294-0500 c20011| 2016-04-06T02:52:21.619-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.296-0500 c20011| 2016-04-06T02:52:21.619-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 64 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.298-0500 c20011| 2016-04-06T02:52:21.620-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.299-0500 c20011| 2016-04-06T02:52:21.620-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 64 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:03.301-0500 c20011| 2016-04-06T02:52:21.620-0500 D ASIO 
[NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 63 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.311-0500 c20011| 2016-04-06T02:52:21.621-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 63 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929130000|10, t: 1, h: 3135197531614568333, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929139000|2, t: 2, h: -9164491805014394944, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1459929139000|3, t: 2, h: -3935544630640156266, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03365c17830b843f1a5'), state: 2, when: new Date(1459929139585), why: "splitting chunk [{ _id: -81.0 }, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929139000|4, t: 2, h: -8260193851631985048, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929137199), up: 10, waiting: false } } }, { ts: Timestamp 1459929139000|5, t: 2, h: 666054914550689290, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929137435), up: 10, waiting: false } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.313-0500 c20011| 2016-04-06T02:52:21.621-0500 D REPL [rsBackgroundSync-0] fetcher read 5 operations from remote oplog starting at ts: Timestamp 1459929130000|10 and ending at ts: Timestamp 1459929139000|5 [js_test:multi_coll_drop] 2016-04-06T02:53:03.315-0500 c20011| 2016-04-06T02:52:21.621-0500 D STORAGE [rsSync] stored meta data for local.replset.minvalid @ RecordId(17) [js_test:multi_coll_drop] 2016-04-06T02:53:03.317-0500 c20011| 2016-04-06T02:52:21.621-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-39--6404702321693896372 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1) [js_test:multi_coll_drop] 2016-04-06T02:53:03.320-0500 c20011| 2016-04-06T02:52:21.623-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 69 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:26.623-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.322-0500 c20011| 2016-04-06T02:52:21.624-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 69 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.324-0500 c20011| 2016-04-06T02:52:21.627-0500 D STORAGE [rsSync] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-39--6404702321693896372 ok range 1 -> 1 current: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.325-0500 c20011| 2016-04-06T02:52:21.627-0500 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:53:03.328-0500 c20011| 2016-04-06T02:52:21.627-0500 D STORAGE [rsSync] WiredTigerKVEngine::createSortedDataInterface ident: index-40--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : 
"local.replset.minvalid" }), [js_test:multi_coll_drop] 2016-04-06T02:53:03.332-0500 c20011| 2016-04-06T02:52:21.627-0500 D STORAGE [rsSync] create uri: table:index-40--6404702321693896372 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=6,infoObj={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" }), [js_test:multi_coll_drop] 2016-04-06T02:53:03.334-0500 c20011| 2016-04-06T02:52:21.634-0500 D STORAGE [rsSync] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-40--6404702321693896372 ok range 6 -> 6 current: 6 [js_test:multi_coll_drop] 2016-04-06T02:53:03.334-0500 c20011| 2016-04-06T02:52:21.634-0500 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset [js_test:multi_coll_drop] 2016-04-06T02:53:03.337-0500 c20011| 2016-04-06T02:52:21.634-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.338-0500 c20011| 2016-04-06T02:52:21.634-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.338-0500 c20011| 2016-04-06T02:52:21.634-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.339-0500 c20011| 2016-04-06T02:52:21.634-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.341-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.341-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.342-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.348-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.349-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.350-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.352-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.352-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.353-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.354-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.356-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool 
[js_test:multi_coll_drop] 2016-04-06T02:53:03.357-0500 c20011| 2016-04-06T02:52:21.635-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.358-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.360-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.365-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.368-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.370-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.370-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.374-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.375-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.376-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.378-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.380-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.383-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.386-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.389-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.391-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.394-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.399-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.399-0500 c20011| 2016-04-06T02:52:21.635-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.401-0500 c20011| 2016-04-06T02:52:21.636-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.405-0500 c20011| 2016-04-06T02:52:21.636-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.405-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.412-0500 c20011| 2016-04-06T02:52:21.636-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.413-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.414-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.424-0500 c20011| 2016-04-06T02:52:21.636-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 70 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.426-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.428-0500 c20011| 2016-04-06T02:52:21.636-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 70 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.429-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.433-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.434-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.436-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.442-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.447-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 3] 
starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.449-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.451-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.455-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.456-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.462-0500 c20011| 2016-04-06T02:52:21.636-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.465-0500 c20011| 2016-04-06T02:52:21.637-0500 D REPL [rsSync] replication batch size is 3 [js_test:multi_coll_drop] 2016-04-06T02:53:03.465-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.470-0500 c20011| 2016-04-06T02:52:21.637-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.476-0500 c20011| 2016-04-06T02:52:21.637-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 70 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.484-0500 c20011| 2016-04-06T02:52:21.637-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.486-0500 c20011| 2016-04-06T02:52:21.637-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.487-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.487-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.495-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.509-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.511-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.514-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.518-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.520-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.522-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:03.522-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.523-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.525-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.528-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.529-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.531-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.532-0500 c20011| 2016-04-06T02:52:21.637-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.535-0500 c20011| 2016-04-06T02:52:21.638-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.541-0500 c20011| 2016-04-06T02:52:21.638-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.550-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 72 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.551-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 72 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.552-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 72 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.558-0500 c20011| 2016-04-06T02:52:21.638-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: 
Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.562-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 73 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.563-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.565-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 73 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.565-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 74 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.567-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 73 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.572-0500 c20011| 2016-04-06T02:52:21.638-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.579-0500 c20011| 2016-04-06T02:52:21.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 76 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.580-0500 c20011| 2016-04-06T02:52:21.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 76 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.581-0500 c20011| 2016-04-06T02:52:21.639-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 76 finished with response: { ok: 1.0 } 
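Requests 70 through 76 above are the sync-source feedback loop: after each applied batch the reporter sends replSetUpdatePosition to the upstream node mongovm16:20012, and member 0's durableOpTime climbs from 1459929130000|10 to 1459929139000|5 until it matches its appliedOpTime. A hedged shell sketch for watching the same per-member optimes (field names such as optimeDurable vary across 3.x releases, so this is illustrative only):

    // Print each member's applied and durable optimes, the same values
    // the reporter ships upstream in replSetUpdatePosition.
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        print(m.name + "  applied: " + tojson(m.optime) +
              "  durable: " + tojson(m.optimeDurable));
    });
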
[js_test:multi_coll_drop] 2016-04-06T02:53:03.584-0500 c20011| 2016-04-06T02:52:21.641-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.588-0500 c20011| 2016-04-06T02:52:21.641-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 69 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.589-0500 c20011| 2016-04-06T02:52:21.641-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 74 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:03.592-0500 c20011| 2016-04-06T02:52:21.641-0500 D COMMAND [conn36] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.596-0500 c20011| 2016-04-06T02:52:21.641-0500 D REPL [conn36] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929139000|5, t: 2 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.599-0500 c20011| 2016-04-06T02:52:21.641-0500 D REPL [conn36] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999978μs [js_test:multi_coll_drop] 2016-04-06T02:53:03.600-0500 c20011| 2016-04-06T02:52:21.644-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929139000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.602-0500 c20011| 2016-04-06T02:52:21.644-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:03.607-0500 c20011| 2016-04-06T02:52:21.644-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 80 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:26.644-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.607-0500 c20011| 2016-04-06T02:52:21.644-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 80 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.610-0500 c20011| 2016-04-06T02:52:21.644-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.612-0500 c20011| 2016-04-06T02:52:21.644-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.614-0500 c20011| 2016-04-06T02:52:21.644-0500 D QUERY [conn36] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.620-0500 c20011| 2016-04-06T02:52:21.644-0500 I COMMAND [conn36] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:423 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.621-0500 c20011| 2016-04-06T02:52:21.645-0500 D COMMAND [conn36] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.623-0500 c20011| 2016-04-06T02:52:21.645-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.625-0500 c20011| 2016-04-06T02:52:21.645-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.626-0500 c20011| 2016-04-06T02:52:21.645-0500 D QUERY [conn36] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.628-0500 c20011| 2016-04-06T02:52:21.645-0500 I COMMAND [conn36] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:414 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.630-0500 c20011| 2016-04-06T02:52:21.645-0500 D COMMAND [conn36] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.633-0500 c20011| 2016-04-06T02:52:21.645-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.638-0500 c20011| 2016-04-06T02:52:21.645-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.638-0500 c20011| 2016-04-06T02:52:21.645-0500 D QUERY [conn36] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.644-0500 c20011| 2016-04-06T02:52:21.645-0500 I COMMAND [conn36] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:408 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.649-0500 c20011| 2016-04-06T02:52:21.646-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 80 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929141000|1, t: 2, h: 1487969004901916751, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929141645), up: 14, waiting: true } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.655-0500 c20011| 2016-04-06T02:52:21.646-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929141000|1 and ending at ts: Timestamp 1459929141000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.660-0500 c20011| 2016-04-06T02:52:21.646-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.662-0500 c20011| 2016-04-06T02:52:21.646-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.663-0500 c20011| 2016-04-06T02:52:21.646-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.664-0500 c20011| 2016-04-06T02:52:21.646-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.665-0500 c20011| 2016-04-06T02:52:21.646-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.667-0500 c20011| 2016-04-06T02:52:21.646-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.667-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.669-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.670-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.671-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.673-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.675-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.677-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.677-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.678-0500 c20011| 2016-04-06T02:52:21.647-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.685-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.688-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.688-0500 c20011| 2016-04-06T02:52:21.647-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.700-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.701-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.703-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:03.707-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.719-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.719-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.720-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.721-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.722-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.725-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.726-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.727-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.727-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.727-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.728-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.729-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.730-0500 c20011| 2016-04-06T02:52:21.647-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.731-0500 c20011| 2016-04-06T02:52:21.647-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.738-0500 c20011| 2016-04-06T02:52:21.648-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.740-0500 c20011| 2016-04-06T02:52:21.648-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 82 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.741-0500 c20011| 2016-04-06T02:52:21.648-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 82 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.742-0500 c20011| 2016-04-06T02:52:21.648-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 82 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.744-0500 c20011| 2016-04-06T02:52:21.651-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 84 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:26.651-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.760-0500 c20011| 2016-04-06T02:52:21.651-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.762-0500 c20011| 2016-04-06T02:52:21.651-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 85 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } 
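The conn36 reads above illustrate readConcern "majority" with afterOpTime: the find parks in waitUntilOpTime until a committed snapshot at or past { ts: Timestamp 1459929139000|5, t: 2 } exists, and only runs once _lastCommittedOpTime has advanced that far. A minimal sketch reissuing the logged command by hand against the config server; note the server log prints timestamps as millis|increment, while the shell's Timestamp() takes (seconds, increment):

    // Re-run conn36's causally-pinned majority read of config.shards.
    var res = db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929139, 5), t: NumberLong(2) }
        },
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);
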
[js_test:multi_coll_drop] 2016-04-06T02:53:03.768-0500 c20011| 2016-04-06T02:52:21.651-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 85 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.769-0500 c20011| 2016-04-06T02:52:21.651-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 85 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.770-0500 c20011| 2016-04-06T02:52:21.653-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 84 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.771-0500 c20011| 2016-04-06T02:52:21.654-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 84 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.772-0500 c20011| 2016-04-06T02:52:21.654-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929141000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.772-0500 c20011| 2016-04-06T02:52:21.654-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:03.775-0500 c20011| 2016-04-06T02:52:21.654-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 88 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:26.654-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.776-0500 c20011| 2016-04-06T02:52:21.655-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 88 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.780-0500 c20011| 2016-04-06T02:52:22.562-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:59865 #40 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:03.782-0500 c20011| 2016-04-06T02:52:22.562-0500 D COMMAND [conn40] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.787-0500 c20011| 2016-04-06T02:52:22.563-0500 I COMMAND [conn40] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.796-0500 c20011| 2016-04-06T02:52:22.563-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|40 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.806-0500 c20011| 2016-04-06T02:52:22.563-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.832-0500 c20011| 2016-04-06T02:52:22.563-0500 D COMMAND [conn40] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|40 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.834-0500 c20011| 2016-04-06T02:52:22.563-0500 D COMMAND [conn38] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.836-0500 c20011| 2016-04-06T02:52:22.563-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.838-0500 c20011| 2016-04-06T02:52:22.563-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.838-0500 c20011| 2016-04-06T02:52:22.563-0500 D QUERY [conn38] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.839-0500 c20011| 2016-04-06T02:52:22.563-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:03.843-0500 c20011| 2016-04-06T02:52:22.563-0500 I COMMAND [conn38] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:408 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.847-0500 c20011| 2016-04-06T02:52:22.563-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|40 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:03.852-0500 c20011| 2016-04-06T02:52:22.565-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 88 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|1, t: 2, h: -2425702389962912903, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: -80.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-81.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: 
-80.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-80.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.854-0500 c20011| 2016-04-06T02:52:22.565-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|1 and ending at ts: Timestamp 1459929142000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.858-0500 c20011| 2016-04-06T02:52:22.565-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.858-0500 c20011| 2016-04-06T02:52:22.565-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.858-0500 c20011| 2016-04-06T02:52:22.565-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.859-0500 c20011| 2016-04-06T02:52:22.565-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.861-0500 c20011| 2016-04-06T02:52:22.565-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.863-0500 c20011| 2016-04-06T02:52:22.565-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.864-0500 c20011| 2016-04-06T02:52:22.565-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.865-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.866-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.872-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.873-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.874-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.878-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.880-0500 c20011| 2016-04-06T02:52:22.566-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.881-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.881-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.882-0500 c20011| 2016-04-06T02:52:22.566-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-81.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.883-0500 c20011| 2016-04-06T02:52:22.566-0500 D 
EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.884-0500 c20011| 2016-04-06T02:52:22.566-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-80.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.885-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.886-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.888-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.889-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.891-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.892-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.893-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.894-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.898-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.899-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.899-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.903-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.904-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.905-0500 c20011| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.908-0500 c20011| 2016-04-06T02:52:22.567-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 90 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.567-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.910-0500 c20011| 2016-04-06T02:52:22.567-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 90 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.913-0500 c20011| 2016-04-06T02:52:22.567-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 90 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|2, t: 2, h: 
2120859807080656699, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929142564), up: 15, waiting: true } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.915-0500 c20011| 2016-04-06T02:52:22.568-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|2 and ending at ts: Timestamp 1459929142000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:03.918-0500 c20011| 2016-04-06T02:52:22.570-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 92 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.570-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:03.918-0500 c20011| 2016-04-06T02:52:22.570-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 92 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.923-0500 c20011| 2016-04-06T02:52:22.570-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.926-0500 c20011| 2016-04-06T02:52:22.570-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.928-0500 c20011| 2016-04-06T02:52:22.570-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.931-0500 c20011| 2016-04-06T02:52:22.571-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.934-0500 c20011| 2016-04-06T02:52:22.571-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:03.939-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.941-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.944-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.950-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.951-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.953-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.958-0500 c20011| 2016-04-06T02:52:22.571-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.964-0500 c20011| 2016-04-06T02:52:22.571-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 93 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:03.964-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.966-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.968-0500 c20011| 2016-04-06T02:52:22.571-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 93 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:03.970-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.971-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.971-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool 
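The applyOps batch delivered by request 88 is a split commit: chunk multidrop.coll-_id_-81.0 at version 1|40 becomes two chunks at versions 1|41 and 1|42, conn40's earlier find on config.chunks with lastmod: { $gte: Timestamp 1000|40 } is a router refreshing its chunk map past that version, and request 100 below fetches the matching config.changelog "split" document. A sketch of both reads, assuming a shell connected to the config server:

    var config = db.getSiblingDB("config");

    // The chunk-map refresh mongos issued on conn40; chunk versions
    // (lastmod) are Timestamp(major, minor) pairs.
    config.chunks.find({ ns: "multidrop.coll",
                         lastmod: { $gte: Timestamp(1, 40) } })
                 .sort({ lastmod: 1 })
                 .forEach(printjson);

    // The audit trail the splitter leaves behind.
    config.changelog.find({ what: "split", ns: "multidrop.coll" })
                    .sort({ time: -1 })
                    .forEach(printjson);
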
[js_test:multi_coll_drop] 2016-04-06T02:53:03.972-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.976-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.978-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.978-0500 c20011| 2016-04-06T02:52:22.571-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:03.981-0500 c20011| 2016-04-06T02:52:22.571-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 93 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:03.982-0500 c20011| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.983-0500 c20011| 2016-04-06T02:52:22.572-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:03.985-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.986-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.987-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.987-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.990-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.994-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.997-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.998-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:03.999-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.007-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.007-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.010-0500 c20011| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.011-0500 c20011| 2016-04-06T02:52:22.573-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:04.013-0500 c20011| 2016-04-06T02:52:22.573-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.016-0500 c20011| 2016-04-06T02:52:22.573-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.017-0500 c20011| 2016-04-06T02:52:22.579-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.017-0500 c20011| 2016-04-06T02:52:22.580-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.019-0500 c20011| 2016-04-06T02:52:22.580-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.022-0500 c20011| 2016-04-06T02:52:22.580-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.024-0500 c20011| 2016-04-06T02:52:22.580-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 95 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.025-0500 c20011| 2016-04-06T02:52:22.580-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 95 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.027-0500 c20011| 2016-04-06T02:52:22.580-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 95 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.034-0500 c20011| 2016-04-06T02:52:22.590-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.038-0500 c20011| 2016-04-06T02:52:22.590-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 97 -- 
target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.040-0500 c20011| 2016-04-06T02:52:22.590-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 97 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.045-0500 c20011| 2016-04-06T02:52:22.590-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 97 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.047-0500 c20011| 2016-04-06T02:52:22.591-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 92 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.048-0500 c20011| 2016-04-06T02:52:22.591-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.050-0500 c20011| 2016-04-06T02:52:22.591-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:04.057-0500 c20011| 2016-04-06T02:52:22.591-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 100 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.591-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.058-0500 c20011| 2016-04-06T02:52:22.591-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 100 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.066-0500 c20011| 2016-04-06T02:52:22.591-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 100 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|3, t: 2, h: -7768965791966286535, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:22.591-0500-5704c03665c17830b843f1a6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142591), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -81.0 }, max: { _id: MaxKey } }, left: { min: { _id: -81.0 }, max: { _id: -80.0 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -80.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.068-0500 c20011| 2016-04-06T02:52:22.592-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|3 and ending at ts: Timestamp 1459929142000|3 [js_test:multi_coll_drop] 2016-04-06T02:53:04.074-0500 c20011| 2016-04-06T02:52:22.592-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.074-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.075-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.076-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.081-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.082-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.085-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.088-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.089-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.089-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.090-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.091-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.092-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.093-0500 c20011| 2016-04-06T02:52:22.593-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:04.093-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.097-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.097-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.097-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.099-0500 c20011| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.099-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.100-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
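Annotation: the one-document batch fetched above (Request 100) is the config.changelog record of a chunk split on multidrop.coll, with the resulting left/right chunk bounds and lastmod versions in details. A hedged shell query to pull such records back out, assuming a connection to the config host named in the log:

    var config = new Mongo("mongovm16:20011").getDB("config");        // assumed config host
    config.changelog.find({ what: "split", ns: "multidrop.coll" }).sort({ time: -1 }).forEach(function(doc) {
        // details.left/details.right carry the post-split chunk bounds
        print(doc.time + "  split at " + tojson(doc.details.left.max));
    });
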
2016-04-06T02:53:04.103-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.103-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.103-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.107-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.107-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.107-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.109-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.111-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.113-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.114-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.115-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.116-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.118-0500 c20011| 2016-04-06T02:52:22.594-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.120-0500 c20011| 2016-04-06T02:52:22.594-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.134-0500 c20011| 2016-04-06T02:52:22.594-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 102 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.594-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.142-0500 c20011| 2016-04-06T02:52:22.594-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.143-0500 c20011| 2016-04-06T02:52:22.594-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 102 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.143-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.144-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.145-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.149-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.150-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.150-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.153-0500 c20012| 2016-04-06T02:52:08.837-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.154-0500 c20012| 2016-04-06T02:52:08.837-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.160-0500 c20012| 2016-04-06T02:52:08.837-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.164-0500 c20012| 2016-04-06T02:52:08.837-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 642 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|44, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.165-0500 c20012| 2016-04-06T02:52:08.837-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 642 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.171-0500 c20012| 2016-04-06T02:52:08.837-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 643 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.837-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.173-0500 c20012| 2016-04-06T02:52:08.837-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 642 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.173-0500 c20012| 2016-04-06T02:52:08.839-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 643 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.178-0500 c20012| 2016-04-06T02:52:08.839-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.184-0500 c20012| 2016-04-06T02:52:08.839-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 645 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|45, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.185-0500 c20012| 2016-04-06T02:52:08.839-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 645 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.186-0500 c20012| 2016-04-06T02:52:08.839-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 645 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.195-0500 c20012| 2016-04-06T02:52:08.839-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 643 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|46, t: 1, h: 3326031865404345327, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: -92.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-93.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-92.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.204-0500 c20012| 2016-04-06T02:52:08.839-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|46 and ending at ts: Timestamp 1459929128000|46 [js_test:multi_coll_drop] 2016-04-06T02:53:04.213-0500 c20011| 2016-04-06T02:52:22.594-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 103 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.214-0500 c20011| 2016-04-06T02:52:22.594-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 103 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.216-0500 c20011| 2016-04-06T02:52:22.595-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 103 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.225-0500 c20011| 2016-04-06T02:52:22.615-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.234-0500 c20011| 2016-04-06T02:52:22.615-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 105 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.235-0500 c20011| 2016-04-06T02:52:22.615-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 105 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.235-0500 c20011| 2016-04-06T02:52:22.615-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 105 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.244-0500 c20011| 2016-04-06T02:52:22.625-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 102 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.246-0500 c20011| 2016-04-06T02:52:22.626-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.248-0500 c20011| 2016-04-06T02:52:22.626-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:04.259-0500 c20011| 2016-04-06T02:52:22.626-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 108 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.626-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|2, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.264-0500 c20011| 2016-04-06T02:52:22.626-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 108 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.268-0500 c20011| 2016-04-06T02:52:22.633-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.272-0500 c20011| 2016-04-06T02:52:22.633-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 109 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.273-0500 c20011| 2016-04-06T02:52:22.633-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 109 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.278-0500 c20011| 2016-04-06T02:52:22.633-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 109 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.280-0500 c20011| 2016-04-06T02:52:22.633-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 108 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.282-0500 c20011| 2016-04-06T02:52:22.634-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.282-0500 c20011| 2016-04-06T02:52:22.634-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:04.286-0500 c20011| 2016-04-06T02:52:22.634-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 112 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.634-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.287-0500 c20011| 2016-04-06T02:52:22.634-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 112 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:04.292-0500 c20012| 2016-04-06T02:52:08.840-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.293-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.296-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.298-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.299-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.301-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.301-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.302-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.304-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.304-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.305-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.307-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.310-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.312-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.313-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.314-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.315-0500 c20012| 2016-04-06T02:52:08.840-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:04.316-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.317-0500 c20012| 2016-04-06T02:52:08.840-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-93.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:04.320-0500 c20012| 2016-04-06T02:52:08.840-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-92.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:04.322-0500 c20012| 2016-04-06T02:52:08.840-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
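Annotation: the RemoteCommand getMore traffic above is the oplog fetcher keeping an awaitData cursor on local.oplog.rs alive with ~2.5 s waits; the term and lastKnownCommittedOpTime fields it sends are internal replication bookkeeping, which the sketch below omits. A minimal tailing loop of the same shape (host and start timestamp are placeholders; the log renders the BSON timestamp as seconds*1000|increment, e.g. 1459929142000|3):

    var local = new Mongo("mongovm16:20012").getDB("local");          // assumed sync source
    var first = local.runCommand({ find: "oplog.rs",
                                   filter: { ts: { $gte: Timestamp(1459929142, 3) } },
                                   tailable: true, awaitData: true });
    var cursorId = first.cursor.id;
    while (cursorId != 0) {                                           // loops until the cursor is killed
        var res = local.runCommand({ getMore: cursorId, collection: "oplog.rs", maxTimeMS: 2500 });
        res.cursor.nextBatch.forEach(printjson);                      // each doc is one replicated op
        cursorId = res.cursor.id;
    }
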
2016-04-06T02:53:04.323-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.323-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.330-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.331-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.333-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.334-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.334-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.345-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.346-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.348-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.350-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.351-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.351-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.355-0500 c20011| 2016-04-06T02:52:22.634-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 112 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|4, t: 2, h: 5387421193544532636, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.357-0500 c20011| 2016-04-06T02:52:22.645-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|4 and ending at ts: Timestamp 1459929142000|4 [js_test:multi_coll_drop] 2016-04-06T02:53:04.359-0500 c20011| 2016-04-06T02:52:22.645-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.359-0500 c20011| 2016-04-06T02:52:22.645-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.360-0500 c20011| 2016-04-06T02:52:22.645-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.362-0500 c20011| 2016-04-06T02:52:22.645-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.363-0500 c20011| 2016-04-06T02:52:22.645-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.364-0500 c20011| 2016-04-06T02:52:22.645-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.366-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.366-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.370-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.372-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.374-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.374-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.378-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.382-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.384-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.384-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.385-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.386-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.392-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.394-0500 c20013| 2016-04-06T02:52:08.907-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:04.396-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
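Annotation: the op fetched just above (op: "u", ns: "config.locks", { $set: { state: 0 } }) is the distributed lock on multidrop.coll being released after the split commits. A hedged way to watch that lock from the shell; state 0 means unlocked, non-zero means held or contended, and who/why record the holder:

    var cfg = new Mongo("mongovm16:20011").getDB("config");           // assumed config host
    var lock = cfg.locks.findOne({ _id: "multidrop.coll" });
    print("state=" + lock.state + " who=" + lock.who + " why=" + lock.why);
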
2016-04-06T02:53:04.396-0500 c20013| 2016-04-06T02:52:08.907-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:04.398-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.399-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.400-0500 c20013| 2016-04-06T02:52:08.907-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 724 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.401-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.401-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.402-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.404-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.405-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.408-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.410-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.411-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.412-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.414-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.415-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.416-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.417-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.418-0500 c20012| 2016-04-06T02:52:08.841-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.418-0500 c20012| 2016-04-06T02:52:08.841-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.421-0500 c20012| 2016-04-06T02:52:08.841-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.422-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.422-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.422-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.424-0500 c20013| 2016-04-06T02:52:08.908-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.425-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.427-0500 c20013| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.427-0500 c20013| 2016-04-06T02:52:08.908-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.431-0500 c20012| 2016-04-06T02:52:08.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 648 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|45, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.431-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.432-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.434-0500 c20011| 2016-04-06T02:52:22.647-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 114 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.647-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.435-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.437-0500 c20012| 2016-04-06T02:52:08.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 648 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.438-0500 c20012| 2016-04-06T02:52:08.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 648 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.444-0500 c20012| 2016-04-06T02:52:08.841-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.451-0500 c20012| 2016-04-06T02:52:08.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 650 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.452-0500 c20012| 2016-04-06T02:52:08.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 650 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.454-0500 c20012| 2016-04-06T02:52:08.842-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 651 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.842-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|45, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.456-0500 c20012| 2016-04-06T02:52:08.842-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 650 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.461-0500 c20012| 2016-04-06T02:52:08.842-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 651 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.463-0500 c20012| 2016-04-06T02:52:08.845-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 651 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.464-0500 c20012| 2016-04-06T02:52:08.845-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|46, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.465-0500 c20012| 2016-04-06T02:52:08.845-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:04.468-0500 c20012| 2016-04-06T02:52:08.845-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 654 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.845-0500 cmd:{ 
getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.469-0500 c20012| 2016-04-06T02:52:08.845-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 654 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.477-0500 c20012| 2016-04-06T02:52:08.848-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 654 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|47, t: 1, h: -7437953265225953598, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.842-0500-5704c02865c17830b843f18d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128842), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -93.0 }, max: { _id: MaxKey } }, left: { min: { _id: -93.0 }, max: { _id: -92.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -92.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.479-0500 c20012| 2016-04-06T02:52:08.848-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|47 and ending at ts: Timestamp 1459929128000|47 [js_test:multi_coll_drop] 2016-04-06T02:53:04.481-0500 c20012| 2016-04-06T02:52:08.848-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.482-0500 c20012| 2016-04-06T02:52:08.848-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.484-0500 c20012| 2016-04-06T02:52:08.848-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.489-0500 c20012| 2016-04-06T02:52:08.848-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.490-0500 c20012| 2016-04-06T02:52:08.848-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.492-0500 c20012| 2016-04-06T02:52:08.848-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.495-0500 c20012| 2016-04-06T02:52:08.848-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.496-0500 c20012| 2016-04-06T02:52:08.848-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.498-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.499-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.499-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.501-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl 
writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.505-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.506-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.508-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.510-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.511-0500 c20012| 2016-04-06T02:52:08.849-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:04.512-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.513-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.520-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.522-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.527-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.530-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.530-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.533-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.535-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.537-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.540-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.541-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.541-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.542-0500 c20012| 2016-04-06T02:52:08.849-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.543-0500 c20012| 2016-04-06T02:52:08.850-0500 D EXECUTOR [repl writer worker 2] shutting down 
thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.547-0500 c20012| 2016-04-06T02:52:08.850-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.549-0500 c20012| 2016-04-06T02:52:08.850-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.553-0500 c20012| 2016-04-06T02:52:08.850-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 656 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.850-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|46, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.555-0500 c20012| 2016-04-06T02:52:08.850-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.558-0500 c20012| 2016-04-06T02:52:08.850-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 656 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.567-0500 c20012| 2016-04-06T02:52:08.850-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.572-0500 c20012| 2016-04-06T02:52:08.850-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 657 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|46, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.574-0500 c20012| 2016-04-06T02:52:08.850-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 657 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.577-0500 c20012| 2016-04-06T02:52:08.851-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 657 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.579-0500 c20012| 2016-04-06T02:52:08.851-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 656 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|48, t: 1, h: -6375076965146338454, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.582-0500 c20012| 2016-04-06T02:52:08.851-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|47, t: 1 } 
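Annotation: "Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|47, t: 1 }" is the commit point advancing once a majority of the three config members have a durable optime at or past that timestamp. A simplified sketch of that rule over optimes arrays like the ones logged above; the member values here are illustrative, and the real server also weighs terms and config versions:

    function cmpTsDesc(a, b) { return (b.t - a.t) || (b.i - a.i); }   // newest timestamp first
    function majorityCommitPoint(optimes) {
        var ts = optimes.map(function(o) { return o.durableOpTime.ts; }).sort(cmpTsDesc);
        return ts[Math.floor(optimes.length / 2)];                    // newest ts held by a majority
    }
    printjson(majorityCommitPoint([
        { durableOpTime: { ts: Timestamp(1459929128, 47), t: 1 } },   // illustrative member optimes
        { durableOpTime: { ts: Timestamp(1459929128, 47), t: 1 } },
        { durableOpTime: { ts: Timestamp(1459929127, 16), t: 1 } }
    ]));                                                              // -> Timestamp(1459929128, 47)
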
[js_test:multi_coll_drop] 2016-04-06T02:53:04.586-0500 c20012| 2016-04-06T02:52:08.851-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|48 and ending at ts: Timestamp 1459929128000|48 [js_test:multi_coll_drop] 2016-04-06T02:53:04.589-0500 c20012| 2016-04-06T02:52:08.851-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.589-0500 c20012| 2016-04-06T02:52:08.851-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.591-0500 c20012| 2016-04-06T02:52:08.851-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.592-0500 c20012| 2016-04-06T02:52:08.851-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.593-0500 c20012| 2016-04-06T02:52:08.851-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.594-0500 c20012| 2016-04-06T02:52:08.851-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.594-0500 c20012| 2016-04-06T02:52:08.851-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.596-0500 c20012| 2016-04-06T02:52:08.851-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.599-0500 c20012| 2016-04-06T02:52:08.851-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.601-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.604-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.605-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.607-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.608-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.608-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.615-0500 c20012| 2016-04-06T02:52:08.852-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: 
Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.620-0500 c20012| 2016-04-06T02:52:08.852-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 660 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.622-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.622-0500 c20012| 2016-04-06T02:52:08.852-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 660 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.624-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.625-0500 c20012| 2016-04-06T02:52:08.852-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:04.627-0500 c20012| 2016-04-06T02:52:08.852-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:04.630-0500 c20012| 2016-04-06T02:52:08.852-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 660 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.630-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.630-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.631-0500 c20012| 2016-04-06T02:52:08.852-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.633-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.633-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.633-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.634-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.636-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.637-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.638-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 7] shutting down thread in 
pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.642-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.645-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.647-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.648-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.652-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.655-0500 c20012| 2016-04-06T02:52:08.853-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.657-0500 c20012| 2016-04-06T02:52:08.853-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.660-0500 c20012| 2016-04-06T02:52:08.853-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.666-0500 c20012| 2016-04-06T02:52:08.853-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 662 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|47, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.666-0500 c20012| 2016-04-06T02:52:08.853-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 662 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.669-0500 c20012| 2016-04-06T02:52:08.853-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 663 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.853-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|47, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.671-0500 c20012| 2016-04-06T02:52:08.853-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 663 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.671-0500 c20012| 
2016-04-06T02:52:08.853-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 662 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.674-0500 c20012| 2016-04-06T02:52:08.856-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 663 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.675-0500 c20012| 2016-04-06T02:52:08.856-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|48, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.677-0500 c20012| 2016-04-06T02:52:08.856-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:04.680-0500 c20012| 2016-04-06T02:52:08.856-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 666 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.856-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.681-0500 c20012| 2016-04-06T02:52:08.856-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 666 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.684-0500 c20012| 2016-04-06T02:52:08.857-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.690-0500 c20012| 2016-04-06T02:52:08.857-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 667 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.691-0500 c20012| 2016-04-06T02:52:08.857-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 667 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.693-0500 c20012| 2016-04-06T02:52:08.857-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 667 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.696-0500 c20012| 2016-04-06T02:52:08.859-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.698-0500 c20012| 2016-04-06T02:52:08.859-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.704-0500 c20012| 2016-04-06T02:52:08.859-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.708-0500 c20012| 2016-04-06T02:52:08.859-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:04.710-0500 c20012| 2016-04-06T02:52:08.859-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|48, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:04.718-0500 c20012| 2016-04-06T02:52:08.860-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 666 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|49, t: 1, h: 8965959093496929051, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f18e'), state: 2, when: new Date(1459929128859), why: "splitting chunk [{ _id: -92.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.724-0500 c20012| 2016-04-06T02:52:08.860-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|49 and ending at ts: Timestamp 1459929128000|49 [js_test:multi_coll_drop] 2016-04-06T02:53:04.725-0500 c20012| 2016-04-06T02:52:08.861-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.727-0500 c20012| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.727-0500 c20012| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.730-0500 c20012| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.731-0500 c20012| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.732-0500 c20012| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.735-0500 c20012| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.737-0500 c20012| 2016-04-06T02:52:08.861-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.738-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.739-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.741-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.746-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.746-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.747-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.749-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.751-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.751-0500 c20012| 2016-04-06T02:52:08.862-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.752-0500 c20012| 2016-04-06T02:52:08.862-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:04.755-0500 c20012| 2016-04-06T02:52:08.862-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:04.763-0500 c20012| 2016-04-06T02:52:08.862-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 670 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.862-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|48, t: 1 } } [js_test:multi_coll_drop] 
2016-04-06T02:53:04.765-0500 c20012| 2016-04-06T02:52:08.862-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 670 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.767-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.768-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.771-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.772-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.773-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.776-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.777-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.778-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.781-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.782-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.784-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.785-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.786-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.786-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.787-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.789-0500 c20012| 2016-04-06T02:52:08.863-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.795-0500 c20012| 2016-04-06T02:52:08.863-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.800-0500 c20012| 2016-04-06T02:52:08.863-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.805-0500 c20012| 2016-04-06T02:52:08.863-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 671 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|48, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.807-0500 c20012| 2016-04-06T02:52:08.863-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 671 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.808-0500 c20012| 2016-04-06T02:52:08.863-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 671 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.815-0500 c20012| 2016-04-06T02:52:08.866-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.829-0500 c20012| 2016-04-06T02:52:08.866-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 673 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.837-0500 c20012| 2016-04-06T02:52:08.866-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 673 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.841-0500 c20012| 2016-04-06T02:52:08.867-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 673 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.845-0500 c20012| 2016-04-06T02:52:08.867-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 670 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.846-0500 c20012| 2016-04-06T02:52:08.867-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|49, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.847-0500 c20012| 2016-04-06T02:52:08.867-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:04.849-0500 c20012| 2016-04-06T02:52:08.867-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 676 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.867-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.850-0500 c20012| 2016-04-06T02:52:08.867-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 676 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.854-0500 c20012| 2016-04-06T02:52:08.869-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 676 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|50, t: 1, h: 2946125543669679599, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: -91.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-92.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-91.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.856-0500 c20012| 2016-04-06T02:52:08.869-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|50 and ending at ts: Timestamp 1459929128000|50 [js_test:multi_coll_drop] 2016-04-06T02:53:04.858-0500 c20012| 2016-04-06T02:52:08.869-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.859-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.861-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.862-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.862-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.863-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.865-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.866-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.869-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.871-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.874-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.875-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.876-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.878-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.881-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.882-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.885-0500 c20012| 2016-04-06T02:52:08.870-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:04.885-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.886-0500 c20012| 2016-04-06T02:52:08.870-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-92.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:04.886-0500 c20012| 2016-04-06T02:52:08.870-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-91.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:04.888-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:04.892-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.892-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.895-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.897-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.899-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.903-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.919-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.920-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.920-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.921-0500 c20012| 2016-04-06T02:52:08.870-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.923-0500 c20012| 2016-04-06T02:52:08.871-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.930-0500 c20012| 2016-04-06T02:52:08.871-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.930-0500 c20012| 2016-04-06T02:52:08.871-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.937-0500 c20012| 2016-04-06T02:52:08.871-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.941-0500 c20012| 2016-04-06T02:52:08.871-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.944-0500 c20012| 2016-04-06T02:52:08.871-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.949-0500 c20012| 2016-04-06T02:52:08.871-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.958-0500 c20012| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 678 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|49, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.961-0500 c20012| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 678 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.966-0500 c20012| 2016-04-06T02:52:08.871-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 679 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.871-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|49, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.967-0500 c20012| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 679 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.968-0500 c20012| 2016-04-06T02:52:08.871-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 678 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.971-0500 c20012| 2016-04-06T02:52:08.872-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.972-0500 c20012| 2016-04-06T02:52:08.872-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 679 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.976-0500 c20012| 2016-04-06T02:52:08.872-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 681 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:04.976-0500 c20012| 2016-04-06T02:52:08.872-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 681 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.978-0500 c20012| 2016-04-06T02:52:08.872-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|50, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.979-0500 c20012| 2016-04-06T02:52:08.873-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:04.981-0500 c20012| 2016-04-06T02:52:08.873-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 683 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.873-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:04.984-0500 c20012| 2016-04-06T02:52:08.873-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 681 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.987-0500 c20012| 2016-04-06T02:52:08.873-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 683 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:04.994-0500 c20012| 2016-04-06T02:52:08.873-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 683 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|51, t: 1, h: -8522370368222023966, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.872-0500-5704c02865c17830b843f18f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128872), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -92.0 }, max: { _id: MaxKey } }, left: { min: { _id: -92.0 }, max: { _id: -91.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -91.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:04.995-0500 c20012| 2016-04-06T02:52:08.873-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|51 and ending at ts: Timestamp 1459929128000|51 [js_test:multi_coll_drop] 2016-04-06T02:53:04.996-0500 c20012| 2016-04-06T02:52:08.874-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:04.997-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.997-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.998-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:04.998-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.004-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.004-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.004-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.004-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.004-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.004-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.006-0500 c20012| 2016-04-06T02:52:08.874-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.007-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.007-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.008-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.009-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.009-0500 c20012| 2016-04-06T02:52:08.875-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:05.009-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.013-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.013-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.013-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:05.014-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.015-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.015-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.015-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.020-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.020-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.022-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.023-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.023-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.025-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.026-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.027-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.029-0500 c20012| 2016-04-06T02:52:08.875-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.031-0500 c20012| 2016-04-06T02:52:08.875-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.035-0500 c20012| 2016-04-06T02:52:08.875-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.040-0500 c20012| 2016-04-06T02:52:08.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 686 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|50, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.042-0500 c20012| 2016-04-06T02:52:08.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 686 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.044-0500 c20012| 2016-04-06T02:52:08.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 686 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.045-0500 c20012| 2016-04-06T02:52:08.878-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 688 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.878-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|50, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.050-0500 c20012| 2016-04-06T02:52:08.878-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 688 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.051-0500 c20012| 2016-04-06T02:52:08.878-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.052-0500 c20012| 2016-04-06T02:52:08.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 689 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|51, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.058-0500 c20012| 2016-04-06T02:52:08.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 689 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.060-0500 c20012| 2016-04-06T02:52:08.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 689 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.064-0500 c20012| 2016-04-06T02:52:08.882-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 688 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.068-0500 c20012| 2016-04-06T02:52:08.882-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|51, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.069-0500 c20012| 2016-04-06T02:52:08.882-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:05.070-0500 c20012| 2016-04-06T02:52:08.882-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 692 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.882-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.073-0500 c20012| 2016-04-06T02:52:08.882-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 692 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.079-0500 c20012| 2016-04-06T02:52:08.882-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 692 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|52, t: 1, h: -8575808186857473367, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.081-0500 c20012| 2016-04-06T02:52:08.882-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|52 and ending at ts: Timestamp 1459929128000|52 [js_test:multi_coll_drop] 2016-04-06T02:53:05.109-0500 c20012| 2016-04-06T02:52:08.883-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.110-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.110-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.110-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.117-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.119-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.120-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.123-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.127-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.127-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.128-0500 c20012| 2016-04-06T02:52:08.883-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.129-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.129-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.131-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.132-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.134-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.135-0500 c20012| 2016-04-06T02:52:08.884-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:05.136-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.137-0500 c20012| 2016-04-06T02:52:08.884-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:05.139-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.140-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:05.142-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.143-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.143-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.146-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.148-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.148-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.149-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.150-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.151-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.152-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.152-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.152-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.153-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.154-0500 c20012| 2016-04-06T02:52:08.884-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.157-0500 c20012| 2016-04-06T02:52:08.884-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.159-0500 c20012| 2016-04-06T02:52:08.885-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 694 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.885-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|51, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.160-0500 c20012| 2016-04-06T02:52:08.885-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.162-0500 c20012| 2016-04-06T02:52:08.885-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 695 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|51, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.163-0500 c20012| 2016-04-06T02:52:08.885-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 694 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.164-0500 c20012| 2016-04-06T02:52:08.885-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 695 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.166-0500 c20012| 2016-04-06T02:52:08.885-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 695 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.168-0500 c20012| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 694 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.171-0500 c20012| 2016-04-06T02:52:08.886-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|52, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.172-0500 c20012| 2016-04-06T02:52:08.886-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:05.173-0500 c20012| 2016-04-06T02:52:08.886-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 698 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.886-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.177-0500 c20012| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 698 on host mongovm16:20011 
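The exchange above is secondary c20012 in its steady-state replication loop: rsBackgroundSync tails the sync source's oplog with blocking getMore calls (RemoteCommands 694 and 698, each waiting up to maxTimeMS: 2500 for new entries), while the SyncSourceFeedback reporter pushes replSetUpdatePosition documents upstream so the primary can advance the majority commit point; once the reported durable optimes from a majority of members cover a position, the ReplicationExecutor bumps _lastCommittedOpTime, as logged right after each batch. The term and lastKnownCommittedOpTime fields in those getMore commands are internal replication metadata; an ordinary client can still inspect the same oplog with a plain query. A minimal shell sketch, illustrative only, runnable against any member of multidrop-configRS:

    // Look at the tail of the oplog that rsBackgroundSync is consuming,
    // as an ordinary client read: the three newest entries, newest first.
    var local = db.getSiblingDB("local");
    local.oplog.rs.find().sort({ $natural: -1 }).limit(3).forEach(printjson);
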
[js_test:multi_coll_drop] 2016-04-06T02:53:05.183-0500 c20012| 2016-04-06T02:52:08.886-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.194-0500 c20012| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 699 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.196-0500 c20012| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 699 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.197-0500 c20012| 2016-04-06T02:52:08.886-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 699 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.200-0500 c20012| 2016-04-06T02:52:08.887-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|18 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.206-0500 c20012| 2016-04-06T02:52:08.887-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.208-0500 c20012| 2016-04-06T02:52:08.887-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|18 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.210-0500 c20012| 2016-04-06T02:52:08.887-0500 D QUERY [conn7] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:05.217-0500 c20012| 2016-04-06T02:52:08.887-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|18 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:05.223-0500 c20012| 2016-04-06T02:52:08.888-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.224-0500 c20012| 2016-04-06T02:52:08.888-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.225-0500 c20012| 2016-04-06T02:52:08.888-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.226-0500 c20012| 2016-04-06T02:52:08.888-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:05.228-0500 c20012| 2016-04-06T02:52:08.888-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|52, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:05.230-0500 c20012| 2016-04-06T02:52:08.889-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 698 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|53, t: 1, h: 6499600219381119724, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f190'), state: 2, when: new Date(1459929128888), why: "splitting chunk [{ _id: -91.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.231-0500 c20012| 2016-04-06T02:52:08.889-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|53 and ending at ts: Timestamp 1459929128000|53 [js_test:multi_coll_drop] 2016-04-06T02:53:05.232-0500 c20012| 2016-04-06T02:52:08.889-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.233-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.234-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.234-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.235-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.236-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.238-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.239-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.239-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.240-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.241-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.242-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.242-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.243-0500 c20012| 2016-04-06T02:52:08.889-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.244-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.245-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.246-0500 c20012| 2016-04-06T02:52:08.890-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:05.247-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.248-0500 c20012| 2016-04-06T02:52:08.890-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:05.249-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.250-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:05.251-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.252-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.252-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.253-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.253-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.254-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.254-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.255-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.256-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.257-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.258-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.259-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.259-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.260-0500 c20012| 2016-04-06T02:52:08.890-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.260-0500 c20012| 2016-04-06T02:52:08.890-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.262-0500 c20012| 2016-04-06T02:52:08.890-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.265-0500 c20012| 2016-04-06T02:52:08.890-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 702 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|52, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.274-0500 c20012| 2016-04-06T02:52:08.890-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 702 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.276-0500 c20012| 2016-04-06T02:52:08.890-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 702 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.277-0500 c20012| 2016-04-06T02:52:08.891-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 704 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.891-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|52, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.277-0500 c20012| 2016-04-06T02:52:08.891-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 704 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.279-0500 c20012| 2016-04-06T02:52:08.892-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 704 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.281-0500 c20012| 2016-04-06T02:52:08.892-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|53, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.281-0500 c20012| 2016-04-06T02:52:08.892-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:05.287-0500 c20012| 2016-04-06T02:52:08.892-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|53, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.288-0500 c20012| 2016-04-06T02:52:08.892-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|53, t: 1 
} } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.294-0500 c20012| 2016-04-06T02:52:08.892-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|53, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.297-0500 c20012| 2016-04-06T02:52:08.892-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:05.312-0500 c20012| 2016-04-06T02:52:08.892-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 706 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.892-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.313-0500 c20012| 2016-04-06T02:52:08.892-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 706 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.316-0500 c20012| 2016-04-06T02:52:08.892-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|53, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:05.320-0500 c20012| 2016-04-06T02:52:08.893-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|53, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.324-0500 c20012| 2016-04-06T02:52:08.893-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.327-0500 c20012| 2016-04-06T02:52:08.893-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|53, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.328-0500 c20012| 2016-04-06T02:52:08.893-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:05.333-0500 c20012| 2016-04-06T02:52:08.893-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.336-0500 c20012| 2016-04-06T02:52:08.893-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|53, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:05.342-0500 c20012| 2016-04-06T02:52:08.893-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 707 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.343-0500 c20012| 2016-04-06T02:52:08.893-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 707 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.344-0500 c20012| 2016-04-06T02:52:08.893-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 707 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.347-0500 c20012| 2016-04-06T02:52:08.894-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 706 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|54, t: 1, h: -140542895342390815, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: -90.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-91.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: 
"multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-90.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.349-0500 c20012| 2016-04-06T02:52:08.895-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|54 and ending at ts: Timestamp 1459929128000|54 [js_test:multi_coll_drop] 2016-04-06T02:53:05.350-0500 c20012| 2016-04-06T02:52:08.895-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.351-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.352-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.352-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.352-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.354-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.354-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.354-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.355-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.356-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.357-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.357-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.358-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.358-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.359-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.359-0500 c20012| 2016-04-06T02:52:08.895-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:05.361-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 15] 
starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.361-0500 c20012| 2016-04-06T02:52:08.895-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-91.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:05.362-0500 c20012| 2016-04-06T02:52:08.895-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-90.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:05.363-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.364-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.367-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.367-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.367-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.369-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.370-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.372-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.373-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.378-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.378-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.379-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.383-0500 c20012| 2016-04-06T02:52:08.895-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.383-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.384-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.384-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.387-0500 c20012| 2016-04-06T02:52:08.896-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.387-0500 c20012| 2016-04-06T02:52:08.896-0500 D QUERY [rsSync] Only 
one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.390-0500 c20012| 2016-04-06T02:52:08.896-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.393-0500 c20012| 2016-04-06T02:52:08.896-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 710 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|53, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.393-0500 c20012| 2016-04-06T02:52:08.896-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 710 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.394-0500 c20012| 2016-04-06T02:52:08.896-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 710 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.394-0500 c20012| 2016-04-06T02:52:08.897-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 712 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.897-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|53, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.395-0500 c20012| 2016-04-06T02:52:08.897-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 712 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.397-0500 c20012| 2016-04-06T02:52:08.898-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.400-0500 c20012| 2016-04-06T02:52:08.898-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 713 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.401-0500 c20012| 2016-04-06T02:52:08.899-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 713 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.401-0500 c20012| 2016-04-06T02:52:08.899-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 713 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.403-0500 c20012| 2016-04-06T02:52:08.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 712 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.405-0500 c20012| 2016-04-06T02:52:08.900-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|54, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.406-0500 c20012| 2016-04-06T02:52:08.900-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:05.407-0500 c20012| 2016-04-06T02:52:08.900-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 716 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.900-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.409-0500 c20012| 2016-04-06T02:52:08.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 716 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.414-0500 c20012| 2016-04-06T02:52:08.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 716 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|55, t: 1, h: -853858200892887985, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.900-0500-5704c02865c17830b843f191", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128900), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -91.0 }, max: { _id: MaxKey } }, left: { min: { _id: -91.0 }, max: { _id: -90.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -90.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.416-0500 c20012| 2016-04-06T02:52:08.901-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|55 and ending at ts: Timestamp 1459929128000|55 [js_test:multi_coll_drop] 2016-04-06T02:53:05.418-0500 c20012| 2016-04-06T02:52:08.901-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.418-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.420-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.421-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.422-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.423-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.425-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.425-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.427-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.427-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.427-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.428-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.429-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.431-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.433-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.436-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.437-0500 c20012| 2016-04-06T02:52:08.901-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:05.437-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.441-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.442-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.443-0500 c20012| 2016-04-06T02:52:08.901-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:05.443-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.448-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.451-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.453-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.455-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.456-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.458-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.460-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.467-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.470-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.471-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.473-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.473-0500 c20012| 2016-04-06T02:52:08.902-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.476-0500 c20012| 2016-04-06T02:52:08.902-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.480-0500 c20012| 2016-04-06T02:52:08.902-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.486-0500 c20012| 2016-04-06T02:52:08.902-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 718 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|54, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.488-0500 c20012| 2016-04-06T02:52:08.902-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 718 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.492-0500 c20012| 2016-04-06T02:52:08.902-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 718 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.497-0500 c20012| 2016-04-06T02:52:08.903-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 720 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.903-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|54, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.498-0500 c20012| 2016-04-06T02:52:08.903-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 720 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.505-0500 c20012| 2016-04-06T02:52:08.905-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.516-0500 c20012| 2016-04-06T02:52:08.905-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 721 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|55, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.518-0500 c20012| 2016-04-06T02:52:08.905-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 721 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.520-0500 c20012| 2016-04-06T02:52:08.905-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 721 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.522-0500 c20012| 2016-04-06T02:52:08.906-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 720 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.525-0500 c20012| 2016-04-06T02:52:08.906-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|55, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.531-0500 c20012| 2016-04-06T02:52:08.906-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:05.534-0500 c20012| 2016-04-06T02:52:08.906-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 724 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.906-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.537-0500 c20012| 2016-04-06T02:52:08.906-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 724 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.540-0500 c20012| 2016-04-06T02:52:08.906-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 724 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|56, t: 1, h: -2006001534307679450, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.543-0500 c20012| 2016-04-06T02:52:08.906-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|56 and ending at ts: Timestamp 1459929128000|56 [js_test:multi_coll_drop] 2016-04-06T02:53:05.543-0500 c20012| 2016-04-06T02:52:08.906-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.545-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.546-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.547-0500 c20011| 2016-04-06T02:52:22.647-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:05.547-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.549-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.549-0500 c20011| 2016-04-06T02:52:22.647-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:05.549-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.550-0500 c20011| 2016-04-06T02:52:22.647-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.554-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.555-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.557-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.557-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.559-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.560-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.560-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.561-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.562-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.563-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.566-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.569-0500 c20011| 2016-04-06T02:52:22.648-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer 
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.569-0500 c20011| 2016-04-06T02:52:22.649-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.570-0500 c20011| 2016-04-06T02:52:22.649-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.571-0500 c20011| 2016-04-06T02:52:22.649-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 114 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:05.571-0500 c20011| 2016-04-06T02:52:22.650-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.575-0500 c20011| 2016-04-06T02:52:22.650-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.578-0500 c20011| 2016-04-06T02:52:22.650-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.587-0500 c20011| 2016-04-06T02:52:22.650-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 115 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.590-0500 c20011| 2016-04-06T02:52:22.650-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 115 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:05.591-0500 c20011| 2016-04-06T02:52:22.650-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 115 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.596-0500 c20011| 2016-04-06T02:52:22.653-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 114 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.599-0500 c20011| 2016-04-06T02:52:22.654-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.600-0500 c20011| 2016-04-06T02:52:22.654-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:05.604-0500 c20011| 2016-04-06T02:52:22.654-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 118 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.654-0500 cmd:{ 
[js_test:multi_coll_drop] 2016-04-06T02:53:05.604-0500 c20011| 2016-04-06T02:52:22.654-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 118 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:05.608-0500 c20011| 2016-04-06T02:52:22.654-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|40 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.608-0500 c20011| 2016-04-06T02:52:22.654-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.613-0500 c20011| 2016-04-06T02:52:22.654-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|40 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.619-0500 c20011| 2016-04-06T02:52:22.655-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:05.622-0500 c20011| 2016-04-06T02:52:22.655-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|40 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:05.630-0500 c20011| 2016-04-06T02:52:22.656-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.634-0500 c20011| 2016-04-06T02:52:22.656-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.638-0500 c20011| 2016-04-06T02:52:22.656-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 }
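conn36's two reads of config.chunks above show how sharding reads config metadata at read concern majority pinned after a specific optime: the server parks the command ("Waiting for 'committed' snapshot") until the majority-committed snapshot has advanced past the requested optime, then answers from that snapshot. Roughly the same command from the shell (note the log prints Timestamps in milliseconds, so Timestamp 1000|40 is chunk version 1|40, i.e. Timestamp(1, 40) in shell syntax):

    var cfg = new Mongo("mongovm16:20011").getDB("config");
    var res = cfg.runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 40) } },
        sort: { lastmod: 1 },
        readConcern: {
            level: "majority",
            // internal field: do not answer from a snapshot older than this optime
            afterOpTime: { ts: Timestamp(1459929142, 4), t: NumberLong(2) }
        },
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);  // the two chunk documents (nreturned:2 above)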
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.639-0500 c20011| 2016-04-06T02:52:22.656-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:05.639-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.640-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.640-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.642-0500 c20013| 2016-04-06T02:52:08.908-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.643-0500 c20013| 2016-04-06T02:52:08.908-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:05.645-0500 c20013| 2016-04-06T02:52:08.909-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:05.651-0500 c20013| 2016-04-06T02:52:08.909-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.654-0500 c20013| 2016-04-06T02:52:08.909-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 726 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:05.655-0500 c20013| 2016-04-06T02:52:08.909-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 726 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:05.656-0500 c20013| 2016-04-06T02:52:08.909-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 727 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.909-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } } [js_test:multi_coll_drop] 
[js_test:multi_coll_drop] 2016-04-06T02:53:05.660-0500 c20013| 2016-04-06T02:52:08.909-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 726 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.663-0500 c20011| 2016-04-06T02:52:22.656-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:05.664-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.665-0500 s20014| 2016-04-06T02:52:48.717-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:05.666-0500 s20014| 2016-04-06T02:52:48.717-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:05.668-0500 s20014| 2016-04-06T02:52:48.717-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20011, no events
[js_test:multi_coll_drop] 2016-04-06T02:53:05.671-0500 c20011| 2016-04-06T02:52:22.656-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.680-0500 c20011| 2016-04-06T02:52:22.656-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 119 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.681-0500 c20011| 2016-04-06T02:52:22.656-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 119 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:05.682-0500 c20011| 2016-04-06T02:52:22.656-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 119 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.689-0500 c20011| 2016-04-06T02:52:22.657-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 118 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|5, t: 2, h: -3686273911828341714, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03665c17830b843f1a7'), state: 2, when: new Date(1459929142656), why: "splitting chunk [{ _id: -80.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.691-0500 c20011| 2016-04-06T02:52:22.657-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|5 and ending at ts: Timestamp 1459929142000|5
[js_test:multi_coll_drop] 2016-04-06T02:53:05.695-0500 c20011| 2016-04-06T02:52:22.657-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:05.696-0500 c20011| 2016-04-06T02:52:22.657-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.697-0500 c20011| 2016-04-06T02:52:22.657-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.698-0500 c20011| 2016-04-06T02:52:22.657-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.702-0500 c20013| 2016-04-06T02:52:08.910-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.707-0500 c20013| 2016-04-06T02:52:08.910-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 729 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.709-0500 c20013| 2016-04-06T02:52:08.910-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 729 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:05.709-0500 c20013| 2016-04-06T02:52:08.911-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 729 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.710-0500 c20013| 2016-04-06T02:52:08.911-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 727 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.712-0500 c20013| 2016-04-06T02:52:08.911-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|56, t: 1 }
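The oplog entry fetched in Request 118's reply is an update to config.locks: the split of multidrop.coll takes the collection's distributed lock by setting state: 2 (locked) together with a fresh lock-session ObjectId and a human-readable why. The lock document can be inspected directly from the shell; a sketch (field values copied from the oplog entry above, so the exact document on a live cluster will differ):

    var locks = new Mongo("mongovm16:20011").getDB("config").locks;
    printjson(locks.findOne({ _id: "multidrop.coll" }));
    // e.g. { _id: "multidrop.coll", state: 2, ts: ObjectId("5704c03665c17830b843f1a7"),
    //        when: ISODate("2016-04-06T07:52:22.656Z"),
    //        why: "splitting chunk [{ _id: -80.0 }, { _id: MaxKey }) in multidrop.coll", ... }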
[js_test:multi_coll_drop] 2016-04-06T02:53:05.715-0500 c20013| 2016-04-06T02:52:08.911-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:05.717-0500 c20013| 2016-04-06T02:52:08.912-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 732 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.912-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.721-0500 c20013| 2016-04-06T02:52:08.912-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 732 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:05.734-0500 c20013| 2016-04-06T02:52:08.912-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.736-0500 c20013| 2016-04-06T02:52:08.912-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.744-0500 c20013| 2016-04-06T02:52:08.912-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.745-0500 c20013| 2016-04-06T02:52:08.912-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:05.749-0500 c20013| 2016-04-06T02:52:08.912-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:05.778-0500 c20013| 2016-04-06T02:52:08.915-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 732 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|57, t: 1, h: 6895435277470795188, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f192'), state: 2, when: new Date(1459929128914), why: "splitting chunk [{ _id: -90.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.782-0500 c20013| 2016-04-06T02:52:08.915-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|57 and ending at ts: Timestamp 1459929128000|57
[js_test:multi_coll_drop] 2016-04-06T02:53:05.790-0500 c20013| 2016-04-06T02:52:08.915-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:05.791-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.792-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.793-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.795-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.796-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.797-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.803-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.808-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.811-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.816-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.818-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.828-0500 c20012| 2016-04-06T02:52:08.907-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:05.828-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.837-0500 c20012| 2016-04-06T02:52:08.907-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.839-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.848-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.849-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.858-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.858-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.859-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.859-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.860-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.861-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.862-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.862-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.863-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.863-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.865-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.866-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.866-0500 c20012| 2016-04-06T02:52:08.907-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.871-0500 c20012| 2016-04-06T02:52:08.908-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:05.877-0500 c20012| 2016-04-06T02:52:08.908-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.884-0500 c20012| 2016-04-06T02:52:08.908-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 726 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|55, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.886-0500 c20012| 2016-04-06T02:52:08.908-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 726 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:05.887-0500 c20012| 2016-04-06T02:52:08.908-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 726 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.889-0500 c20012| 2016-04-06T02:52:08.909-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 728 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.909-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|55, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.891-0500 c20012| 2016-04-06T02:52:08.909-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 728 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:05.896-0500 c20012| 2016-04-06T02:52:08.910-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.901-0500 c20012| 2016-04-06T02:52:08.910-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 729 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.902-0500 c20012| 2016-04-06T02:52:08.911-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 729 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:05.902-0500 c20012| 2016-04-06T02:52:08.911-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 729 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.906-0500 c20012| 2016-04-06T02:52:08.911-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 728 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.907-0500 c20012| 2016-04-06T02:52:08.912-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|56, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.910-0500 c20012| 2016-04-06T02:52:08.912-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:05.912-0500 c20012| 2016-04-06T02:52:08.912-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 732 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.912-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.912-0500 c20012| 2016-04-06T02:52:08.912-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 732 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:05.917-0500 c20012| 2016-04-06T02:52:08.912-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.920-0500 c20012| 2016-04-06T02:52:08.912-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.924-0500 c20012| 2016-04-06T02:52:08.912-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, maxTimeMS: 30000 }
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.927-0500 c20012| 2016-04-06T02:52:08.912-0500 D QUERY [conn7] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:05.931-0500 c20012| 2016-04-06T02:52:08.913-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|20 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:05.933-0500 c20012| 2016-04-06T02:52:08.913-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.934-0500 c20012| 2016-04-06T02:52:08.913-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:05.938-0500 c20012| 2016-04-06T02:52:08.913-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.941-0500 c20012| 2016-04-06T02:52:08.913-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:05.946-0500 c20012| 2016-04-06T02:52:08.913-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:05.950-0500 c20012| 2016-04-06T02:52:08.915-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 732 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|57, t: 1, h: 6895435277470795188, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f192'), state: 2, when: new Date(1459929128914), why: "splitting chunk [{ _id: -90.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:05.952-0500 c20012| 2016-04-06T02:52:08.915-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|57 and ending at ts: Timestamp 1459929128000|57 [js_test:multi_coll_drop] 2016-04-06T02:53:05.954-0500 c20012| 2016-04-06T02:52:08.915-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
[js_test:multi_coll_drop] 2016-04-06T02:53:05.954-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.955-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.956-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.957-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.957-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.959-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.959-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.961-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.961-0500 c20011| 2016-04-06T02:52:22.657-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.963-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.964-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.965-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.966-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.966-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.966-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.969-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.970-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.970-0500 c20011| 2016-04-06T02:52:22.658-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:05.978-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.985-0500 c20011| 2016-04-06T02:52:22.658-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:05.989-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:05.993-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.003-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.010-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.012-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.013-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.013-0500 c20011| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.015-0500 c20011| 2016-04-06T02:52:22.659-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 122 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.659-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.018-0500 c20011| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.019-0500 c20011| 2016-04-06T02:52:22.659-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 122 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.021-0500 c20011| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.022-0500 c20011| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.023-0500 c20011| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.026-0500 c20011| 2016-04-06T02:52:22.660-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.036-0500 c20011| 2016-04-06T02:52:22.660-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.037-0500 c20011| 2016-04-06T02:52:22.660-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.038-0500 c20011| 2016-04-06T02:52:22.660-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.038-0500 c20011| 2016-04-06T02:52:22.660-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.039-0500 c20011| 2016-04-06T02:52:22.661-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.041-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.041-0500 c20011| 2016-04-06T02:52:22.661-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.042-0500 c20011| 2016-04-06T02:52:22.661-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.044-0500 c20011| 2016-04-06T02:52:22.661-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:06.048-0500 c20011| 2016-04-06T02:52:22.662-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.049-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.050-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.051-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.053-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.054-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.055-0500 c20012| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.055-0500 c20012| 2016-04-06T02:52:08.916-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:06.055-0500 c20012| 2016-04-06T02:52:08.916-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.057-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.058-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.060-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.061-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.062-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.063-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.065-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.068-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.068-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.069-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.070-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.071-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.073-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.076-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.077-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.078-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.079-0500 c20012| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.082-0500 c20012| 2016-04-06T02:52:08.916-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:06.117-0500 c20012| 2016-04-06T02:52:08.916-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.120-0500 c20012| 2016-04-06T02:52:08.916-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 734 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.122-0500 c20012| 2016-04-06T02:52:08.916-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 734 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:06.123-0500 c20012| 2016-04-06T02:52:08.916-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 734 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.124-0500 c20012| 2016-04-06T02:52:08.917-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 736 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.917-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.126-0500 c20012| 2016-04-06T02:52:08.917-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 736 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:06.130-0500 c20012| 2016-04-06T02:52:08.918-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.136-0500 c20012| 2016-04-06T02:52:08.918-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 737 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.138-0500 c20012| 2016-04-06T02:52:08.918-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 737 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:06.139-0500 c20012| 2016-04-06T02:52:08.918-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 737 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.140-0500 c20012| 2016-04-06T02:52:08.919-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 736 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.140-0500 c20012| 2016-04-06T02:52:08.919-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|57, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.141-0500 c20012| 2016-04-06T02:52:08.919-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:06.143-0500 c20012| 2016-04-06T02:52:08.919-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 740 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.919-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.146-0500 c20012| 2016-04-06T02:52:08.919-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.150-0500 c20012| 2016-04-06T02:52:08.919-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.150-0500 c20012| 2016-04-06T02:52:08.919-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } }, maxTimeMS: 30000 }
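Each Reporter record above is the SyncSourceFeedback thread pushing replication progress upstream: one optimes entry per member, each carrying that member's durable (journaled) and applied optimes plus the config version. This is an internal command that only replica set members send; a sketch of the payload shape, reconstructed from the log (t: -1 marks an optime recorded before the member learned a term):

    var updatePosition = {
        replSetUpdatePosition: 1,
        optimes: [
            { durableOpTime: { ts: Timestamp(1459929128, 57), t: NumberLong(1) },
              appliedOpTime: { ts: Timestamp(1459929128, 57), t: NumberLong(1) },
              memberId: 1, cfgver: 1 }
        ]
    };
    printjson(updatePosition);  // sent via runCommand against the sync source's admin db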
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.152-0500 c20012| 2016-04-06T02:52:08.919-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 740 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:06.153-0500 c20012| 2016-04-06T02:52:08.919-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:06.157-0500 c20012| 2016-04-06T02:52:08.919-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|57, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:06.160-0500 c20012| 2016-04-06T02:52:08.920-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 740 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|58, t: 1, h: 4890531771943418130, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: -89.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-90.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-89.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.162-0500 c20012| 2016-04-06T02:52:08.921-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|58 and ending at ts: Timestamp 1459929128000|58 [js_test:multi_coll_drop] 2016-04-06T02:53:06.163-0500 c20012| 2016-04-06T02:52:08.921-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
[js_test:multi_coll_drop] 2016-04-06T02:53:06.165-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.168-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.169-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.169-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.170-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.171-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.173-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.174-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.175-0500 s20014| 2016-04-06T02:52:48.718-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20013, no events
[js_test:multi_coll_drop] 2016-04-06T02:53:06.178-0500 c20011| 2016-04-06T02:52:22.662-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 123 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.180-0500 c20011| 2016-04-06T02:52:22.662-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 123 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.181-0500 c20011| 2016-04-06T02:52:22.662-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 123 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.183-0500 c20011| 2016-04-06T02:52:22.664-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 122 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.184-0500 c20011| 2016-04-06T02:52:22.664-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|5, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.185-0500 c20011| 2016-04-06T02:52:22.664-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:06.186-0500 c20011| 2016-04-06T02:52:22.664-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 126 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.664-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } }
getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.188-0500 c20011| 2016-04-06T02:52:22.664-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 126 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.192-0500 c20011| 2016-04-06T02:52:22.666-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.196-0500 c20011| 2016-04-06T02:52:22.666-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 127 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.198-0500 c20011| 2016-04-06T02:52:22.666-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 127 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.199-0500 c20011| 2016-04-06T02:52:22.666-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 127 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.206-0500 c20011| 2016-04-06T02:52:22.667-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 126 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|6, t: 2, h: 4143413929093500490, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: -79.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-80.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-79.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.208-0500 c20011| 2016-04-06T02:52:22.667-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|6 and ending at ts: Timestamp 1459929142000|6 [js_test:multi_coll_drop] 2016-04-06T02:53:06.208-0500 c20011| 2016-04-06T02:52:22.667-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:06.210-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.211-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.212-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.213-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.214-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.217-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.218-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.219-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.220-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.221-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.221-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.223-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.223-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.224-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.225-0500 c20013| 2016-04-06T02:52:08.915-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:06.225-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.227-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.228-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.228-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.229-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:06.232-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.233-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.234-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.235-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.236-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.236-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.238-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.241-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.244-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.245-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.248-0500 c20013| 2016-04-06T02:52:08.915-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:06.249-0500 c20013| 2016-04-06T02:52:08.915-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.249-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.250-0500 c20012| 2016-04-06T02:52:08.921-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:06.250-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.252-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.252-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.253-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.254-0500 c20012| 2016-04-06T02:52:08.921-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-90.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:06.254-0500 c20012| 2016-04-06T02:52:08.921-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-89.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:06.257-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.260-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.262-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.263-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.266-0500 c20011| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.267-0500 c20012| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.267-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.269-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.270-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.271-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.272-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.273-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.275-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.275-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.277-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.279-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.279-0500 c20012| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.282-0500 c20012| 2016-04-06T02:52:08.922-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:06.286-0500 c20012| 2016-04-06T02:52:08.922-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.296-0500 c20012| 2016-04-06T02:52:08.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 742 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.297-0500 c20012| 2016-04-06T02:52:08.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 742 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:06.301-0500 c20012| 2016-04-06T02:52:08.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 742 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.302-0500 c20011| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.302-0500 c20011| 2016-04-06T02:52:22.668-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:06.303-0500 c20011| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.303-0500 c20011| 2016-04-06T02:52:22.668-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-80.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:06.305-0500 c20011| 2016-04-06T02:52:22.668-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-79.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:06.306-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.306-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.308-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.309-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.310-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool 
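
The replSetUpdatePosition traffic above is the sync-source feedback loop on the config replica set: each member reports its own durable and applied optimes (plus the optimes it has heard from the other members) upstream, and the primary advances _lastCommittedOpTime once a majority of the durable optimes reach a given point. The same per-member optimes can be inspected from the shell; a minimal sketch, assuming a direct connection to one of the config servers (e.g. mongovm16:20011) and noting that the exact field layout of replSetGetStatus varies slightly between server versions:

// Print each member's last applied optime, the same values the
// replSetUpdatePosition commands in this log are propagating.
var status = db.adminCommand({ replSetGetStatus: 1 });
status.members.forEach(function(m) {
    print(m.name + "  " + m.stateStr + "  optime: " + tojson(m.optime));
});
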
[js_test:multi_coll_drop] 2016-04-06T02:53:06.312-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.314-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.315-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.316-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.317-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.318-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.319-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.322-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.323-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.324-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.338-0500 c20011| 2016-04-06T02:52:22.669-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 130 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.669-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.340-0500 c20011| 2016-04-06T02:52:22.669-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 130 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.341-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.341-0500 c20011| 2016-04-06T02:52:22.669-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.342-0500 c20011| 2016-04-06T02:52:22.670-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:06.345-0500 c20011| 2016-04-06T02:52:22.670-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.353-0500 c20011| 2016-04-06T02:52:22.670-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 131 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.355-0500 c20011| 2016-04-06T02:52:22.670-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 131 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.356-0500 c20011| 2016-04-06T02:52:22.670-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 131 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.364-0500 c20011| 2016-04-06T02:52:22.676-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.369-0500 c20011| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 133 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.370-0500 c20011| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 133 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.372-0500 c20011| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 130 finished with response: { cursor: 
{ nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.374-0500 c20011| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 133 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.375-0500 c20011| 2016-04-06T02:52:22.676-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.382-0500 c20011| 2016-04-06T02:52:22.676-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:06.384-0500 c20011| 2016-04-06T02:52:22.676-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 136 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.676-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.388-0500 c20011| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 136 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.390-0500 c20011| 2016-04-06T02:52:22.677-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 136 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|7, t: 2, h: 8677287472431646260, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:22.676-0500-5704c03665c17830b843f1a8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142676), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -80.0 }, max: { _id: MaxKey } }, left: { min: { _id: -80.0 }, max: { _id: -79.0 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -79.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.391-0500 c20011| 2016-04-06T02:52:22.677-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|7 and ending at ts: Timestamp 1459929142000|7 [js_test:multi_coll_drop] 2016-04-06T02:53:06.392-0500 c20011| 2016-04-06T02:52:22.677-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:06.392-0500 c20011| 2016-04-06T02:52:22.677-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.394-0500 c20011| 2016-04-06T02:52:22.677-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.394-0500 c20011| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.395-0500 c20011| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.396-0500 c20011| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.397-0500 c20011| 2016-04-06T02:52:22.677-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.397-0500 c20011| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.399-0500 c20011| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.403-0500 c20011| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.404-0500 c20011| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.404-0500 c20011| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.405-0500 c20011| 2016-04-06T02:52:22.679-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 138 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.679-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.406-0500 c20011| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.407-0500 c20011| 2016-04-06T02:52:22.680-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.408-0500 c20011| 2016-04-06T02:52:22.680-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:06.410-0500 c20011| 2016-04-06T02:52:22.680-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.411-0500 c20011| 2016-04-06T02:52:22.680-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.411-0500 c20011| 2016-04-06T02:52:22.681-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.413-0500 c20011| 2016-04-06T02:52:22.686-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 138 on host mongovm16:20012 
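
Each committed chunk split shows up twice in the entries above: as an applyOps oplog entry updating the two config.chunks documents (bumping lastmod, the chunk version, under the same lastmodEpoch) and as a config.changelog document with what: "split" recording the before/left/right bounds. A minimal shell sketch for inspecting that metadata, assuming a connection through the mongos or to the config replica set; the collection and field names are exactly the ones visible in the oplog entries in this log:

var conf = db.getSiblingDB("config");
// Most recent split recorded for the test collection:
conf.changelog.find({ what: "split", ns: "multidrop.coll" })
    .sort({ time: -1 }).limit(1).forEach(printjson);
// Current chunk layout, ordered by chunk version:
conf.chunks.find({ ns: "multidrop.coll" }).sort({ lastmod: 1 }).forEach(printjson);
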
[js_test:multi_coll_drop] 2016-04-06T02:53:06.414-0500 c20011| 2016-04-06T02:52:22.686-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.414-0500 c20011| 2016-04-06T02:52:22.686-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.415-0500 c20011| 2016-04-06T02:52:22.686-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.416-0500 c20011| 2016-04-06T02:52:22.686-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.416-0500 c20011| 2016-04-06T02:52:22.686-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.418-0500 c20011| 2016-04-06T02:52:22.686-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.419-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.420-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.421-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.422-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.423-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.429-0500 c20012| 2016-04-06T02:52:08.923-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.430-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.430-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.432-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.432-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.436-0500 c20012| 2016-04-06T02:52:08.923-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 744 -- 
target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.439-0500 c20012| 2016-04-06T02:52:08.923-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 744 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:06.441-0500 c20012| 2016-04-06T02:52:08.923-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 744 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.445-0500 c20012| 2016-04-06T02:52:08.923-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 746 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.923-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.447-0500 c20012| 2016-04-06T02:52:08.923-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 746 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:06.452-0500 c20012| 2016-04-06T02:52:08.923-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 746 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|59, t: 1, h: -7629652830017108482, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.923-0500-5704c02865c17830b843f193", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128923), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -90.0 }, max: { _id: MaxKey } }, left: { min: { _id: -90.0 }, max: { _id: -89.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -89.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.453-0500 c20012| 2016-04-06T02:52:08.923-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|58, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.456-0500 c20012| 2016-04-06T02:52:08.923-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|59 and ending at ts: Timestamp 1459929128000|59 [js_test:multi_coll_drop] 2016-04-06T02:53:06.463-0500 c20012| 2016-04-06T02:52:08.923-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:06.463-0500 c20012| 2016-04-06T02:52:08.923-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.470-0500 c20012| 2016-04-06T02:52:08.923-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.470-0500 c20012| 2016-04-06T02:52:08.923-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.475-0500 c20012| 2016-04-06T02:52:08.923-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.476-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.479-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.479-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.481-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.482-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.486-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.489-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.491-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.493-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.495-0500 c20011| 2016-04-06T02:52:22.687-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.498-0500 c20011| 2016-04-06T02:52:22.687-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:06.500-0500 c20011| 2016-04-06T02:52:22.687-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.508-0500 c20011| 2016-04-06T02:52:22.687-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 139 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.510-0500 c20011| 2016-04-06T02:52:22.687-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 139 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.511-0500 c20011| 2016-04-06T02:52:22.688-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 139 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.516-0500 c20011| 2016-04-06T02:52:22.690-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.521-0500 c20011| 2016-04-06T02:52:22.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 141 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.522-0500 c20011| 2016-04-06T02:52:22.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 141 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.525-0500 c20011| 2016-04-06T02:52:22.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 141 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.527-0500 c20011| 2016-04-06T02:52:22.690-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 138 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.528-0500 c20011| 2016-04-06T02:52:22.690-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.536-0500 c20011| 2016-04-06T02:52:22.691-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:06.541-0500 c20011| 2016-04-06T02:52:22.691-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 144 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.691-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.542-0500 c20011| 2016-04-06T02:52:22.692-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 144 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.544-0500 c20011| 2016-04-06T02:52:22.692-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 144 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|8, t: 2, h: 6588589552944971315, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.564-0500 c20011| 2016-04-06T02:52:22.692-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|8 and ending at ts: Timestamp 1459929142000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:06.565-0500 c20011| 2016-04-06T02:52:22.692-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:06.570-0500 c20011| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.575-0500 c20011| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.577-0500 c20011| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.581-0500 c20011| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.582-0500 c20011| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.583-0500 c20011| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.583-0500 c20011| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.583-0500 c20011| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.583-0500 c20011| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.586-0500 c20011| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.586-0500 c20011| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.594-0500 c20011| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.595-0500 c20011| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.596-0500 c20011| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.598-0500 c20011| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.598-0500 c20011| 2016-04-06T02:52:22.694-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:06.598-0500 c20011| 2016-04-06T02:52:22.694-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:06.601-0500 c20011| 2016-04-06T02:52:22.694-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 146 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.694-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.602-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:06.603-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.604-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.605-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.606-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.607-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.608-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.610-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.614-0500 c20011| 2016-04-06T02:52:22.694-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 146 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.615-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.616-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.616-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.619-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.619-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.620-0500 c20011| 2016-04-06T02:52:22.694-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.620-0500 c20011| 2016-04-06T02:52:22.695-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.621-0500 c20011| 2016-04-06T02:52:22.695-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.621-0500 c20011| 2016-04-06T02:52:22.695-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:06.623-0500 c20011| 2016-04-06T02:52:22.696-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:06.630-0500 c20011| 2016-04-06T02:52:22.696-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.633-0500 c20011| 2016-04-06T02:52:22.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 147 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.634-0500 c20011| 2016-04-06T02:52:22.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 147 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.636-0500 c20011| 2016-04-06T02:52:22.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 147 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.639-0500 c20011| 2016-04-06T02:52:22.699-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.644-0500 c20011| 2016-04-06T02:52:22.699-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 149 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:06.645-0500 c20011| 2016-04-06T02:52:22.699-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 146 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.647-0500 c20011| 2016-04-06T02:52:22.699-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 149 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.650-0500 c20011| 2016-04-06T02:52:22.699-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.652-0500 c20011| 2016-04-06T02:52:22.699-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:06.656-0500 c20011| 2016-04-06T02:52:22.699-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 151 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.699-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.657-0500 c20011| 2016-04-06T02:52:22.699-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 149 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.658-0500 c20011| 2016-04-06T02:52:22.699-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 151 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:06.664-0500 c20011| 2016-04-06T02:52:22.700-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.668-0500 c20011| 2016-04-06T02:52:22.700-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.673-0500 c20011| 2016-04-06T02:52:22.700-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.674-0500 c20011| 2016-04-06T02:52:22.700-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:06.678-0500 c20011| 2016-04-06T02:52:22.701-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:06.682-0500 c20011| 2016-04-06T02:52:22.701-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|42 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.685-0500 c20011| 2016-04-06T02:52:22.701-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:06.689-0500 c20011| 2016-04-06T02:52:22.701-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|42 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.692-0500 c20011| 2016-04-06T02:52:22.701-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:06.695-0500 c20011| 2016-04-06T02:52:22.701-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|42 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:06.698-0500 c20011| 2016-04-06T02:52:22.703-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 151 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|9, t: 2, h: 4988155221125799883, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03665c17830b843f1a9'), state: 2, when: new Date(1459929142702), why: "splitting chunk [{ _id: -79.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:06.702-0500 c20011| 2016-04-06T02:52:22.703-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|9 and ending at ts: Timestamp 1459929142000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:06.705-0500 c20011| 2016-04-06T02:52:22.703-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
[js_test:multi_coll_drop] 2016-04-06T02:53:06.709-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.711-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.712-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.712-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.713-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.714-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.715-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.716-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.717-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.717-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.718-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.718-0500 c20011| 2016-04-06T02:52:22.704-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:06.718-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.718-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.719-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.719-0500 c20011| 2016-04-06T02:52:22.704-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.720-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.723-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.723-0500 c20011| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.723-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.724-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.724-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.734-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.735-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.736-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.736-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.737-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.738-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.738-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.739-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.740-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.740-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.741-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.741-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.742-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.742-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.744-0500 c20012| 2016-04-06T02:52:08.923-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.744-0500 c20012| 2016-04-06T02:52:08.923-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.745-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.746-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.747-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.749-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.749-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.750-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.751-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.752-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.752-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.753-0500 c20012| 2016-04-06T02:52:08.924-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:06.754-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.755-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.755-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.756-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.757-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.757-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.757-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.759-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.760-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.762-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.762-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.763-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.764-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.765-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.765-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.766-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.767-0500 c20012| 2016-04-06T02:52:08.924-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.768-0500 c20012| 2016-04-06T02:52:08.924-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:06.769-0500 c20012| 2016-04-06T02:52:08.924-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.773-0500 c20012| 2016-04-06T02:52:08.924-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 748 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.774-0500 c20012| 2016-04-06T02:52:08.924-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 748 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:06.775-0500 c20012| 2016-04-06T02:52:08.924-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 748 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.778-0500 c20012| 2016-04-06T02:52:08.925-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
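
The Reporter lines above are a member's sync-source feedback thread pushing replication progress upstream; the primary uses the durableOpTime/appliedOpTime entries to advance the majority commit point. The shape of that internal command as it appears in the log (illustrative only; it is issued by the server itself, never by user code, and the values below are copied from the entry above, where Timestamp 1459929128000|59 is the log's millis|increment notation):

    var updatePosition = {
        replSetUpdatePosition: 1,
        optimes: [
            // One entry per replica-set member this node has progress information for.
            { durableOpTime: { ts: Timestamp(1459929128, 59), t: NumberLong(1) },
              appliedOpTime: { ts: Timestamp(1459929128, 59), t: NumberLong(1) },
              memberId: 1, cfgver: 1 }
        ]
    };
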
[js_test:multi_coll_drop] 2016-04-06T02:53:06.780-0500 c20012| 2016-04-06T02:52:08.925-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 750 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.783-0500 c20012| 2016-04-06T02:52:08.925-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 750 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:06.786-0500 c20012| 2016-04-06T02:52:08.925-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 750 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.790-0500 c20012| 2016-04-06T02:52:08.927-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 752 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.927-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|58, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.790-0500 c20012| 2016-04-06T02:52:08.927-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 752 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:06.793-0500 c20012| 2016-04-06T02:52:08.927-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 752 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|60, t: 1, h: 966868069161096116, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.794-0500 c20012| 2016-04-06T02:52:08.927-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|59, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.797-0500 c20012| 2016-04-06T02:52:08.927-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|60 and ending at ts: Timestamp 1459929128000|60
[js_test:multi_coll_drop] 2016-04-06T02:53:06.798-0500 c20012| 2016-04-06T02:52:08.927-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes
[js_test:multi_coll_drop] 2016-04-06T02:53:06.799-0500 c20012| 2016-04-06T02:52:08.927-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:06.799-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.800-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.801-0500 c20011| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.803-0500 c20011| 2016-04-06T02:52:22.705-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:06.805-0500 c20011| 2016-04-06T02:52:22.705-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.811-0500 c20011| 2016-04-06T02:52:22.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 154 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.814-0500 c20011| 2016-04-06T02:52:22.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 154 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.821-0500 c20011| 2016-04-06T02:52:22.705-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 155 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.705-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.822-0500 c20011| 2016-04-06T02:52:22.706-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 155 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.825-0500 c20011| 2016-04-06T02:52:22.706-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 154 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.831-0500 c20011| 2016-04-06T02:52:22.708-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.835-0500 c20011| 2016-04-06T02:52:22.708-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 157 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.836-0500 c20011| 2016-04-06T02:52:22.708-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 157 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.837-0500 c20011| 2016-04-06T02:52:22.709-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 155 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.838-0500 c20011| 2016-04-06T02:52:22.709-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 157 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.839-0500 c20011| 2016-04-06T02:52:22.709-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|9, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.839-0500 c20011| 2016-04-06T02:52:22.709-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:06.842-0500 c20011| 2016-04-06T02:52:22.709-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 160 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.709-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.842-0500 c20011| 2016-04-06T02:52:22.709-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 160 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.845-0500 c20011| 2016-04-06T02:52:22.709-0500 D COMMAND [conn40] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|9, t: 2 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.846-0500 c20011| 2016-04-06T02:52:22.709-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.848-0500 c20011| 2016-04-06T02:52:22.709-0500 D COMMAND [conn40] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|9, t: 2 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.849-0500 c20011| 2016-04-06T02:52:22.709-0500 D QUERY [conn40] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1
[js_test:multi_coll_drop] 2016-04-06T02:53:06.853-0500 c20011| 2016-04-06T02:52:22.709-0500 I COMMAND [conn40] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|9, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:06.857-0500 c20011| 2016-04-06T02:52:22.712-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 160 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|10, t: 2, h: -1872902091255565203, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: -78.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-79.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-78.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.859-0500 c20011| 2016-04-06T02:52:22.712-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|10 and ending at ts: Timestamp 1459929142000|10
[js_test:multi_coll_drop] 2016-04-06T02:53:06.861-0500 c20011| 2016-04-06T02:52:22.712-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:06.863-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.864-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.864-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.866-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.867-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.867-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.869-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.871-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.878-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.880-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.880-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.881-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.883-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.884-0500 c20011| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.885-0500 c20011| 2016-04-06T02:52:22.713-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:06.887-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.888-0500 c20011| 2016-04-06T02:52:22.713-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-79.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.902-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.903-0500 c20011| 2016-04-06T02:52:22.713-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-78.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.903-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.903-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.904-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.905-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.906-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.907-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.908-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.909-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.911-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.912-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.913-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.914-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.915-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.916-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.918-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.918-0500 c20011| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.919-0500 c20011| 2016-04-06T02:52:22.714-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:06.922-0500 c20011| 2016-04-06T02:52:22.714-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.925-0500 c20011| 2016-04-06T02:52:22.714-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 162 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.927-0500 c20011| 2016-04-06T02:52:22.714-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 162 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.929-0500 c20011| 2016-04-06T02:52:22.714-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 163 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.714-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.929-0500 c20011| 2016-04-06T02:52:22.714-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 163 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.931-0500 c20011| 2016-04-06T02:52:22.714-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 162 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.935-0500 c20011| 2016-04-06T02:52:22.727-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.941-0500 c20011| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 165 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.942-0500 c20011| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 165 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.942-0500 c20011| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 165 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.945-0500 c20011| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 163 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.946-0500 c20011| 2016-04-06T02:52:22.727-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|10, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.947-0500 c20011| 2016-04-06T02:52:22.727-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:06.949-0500 c20011| 2016-04-06T02:52:22.727-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 168 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.727-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.950-0500 c20011| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 168 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:06.954-0500 c20011| 2016-04-06T02:52:22.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 168 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|11, t: 2, h: 1869687273915284121, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:22.727-0500-5704c03665c17830b843f1aa", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142727), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -79.0 }, max: { _id: MaxKey } }, left: { min: { _id: -79.0 }, max: { _id: -78.0 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -78.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:06.956-0500 c20011| 2016-04-06T02:52:22.729-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|11 and ending at ts: Timestamp 1459929142000|11
[js_test:multi_coll_drop] 2016-04-06T02:53:06.961-0500 c20011| 2016-04-06T02:52:22.730-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
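
Alongside the chunk documents, the shard records the split in config.changelog; the inserted document replicated above captures the pre-split range plus the resulting left and right chunks. To inspect that history from a shell connected to the config servers (a sketch):

    db.getSiblingDB("config").changelog.find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })   // most recent splits first
        .limit(5);
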
[js_test:multi_coll_drop] 2016-04-06T02:53:06.962-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.966-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.968-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.969-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.973-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.975-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.976-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.980-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.981-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.983-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.987-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.988-0500 c20011| 2016-04-06T02:52:22.730-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:06.989-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.990-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.991-0500 c20011| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.993-0500 c20011| 2016-04-06T02:52:22.731-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.994-0500 c20011| 2016-04-06T02:52:22.731-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.996-0500 c20011| 2016-04-06T02:52:22.731-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.997-0500 c20011| 2016-04-06T02:52:22.731-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.998-0500 c20011| 2016-04-06T02:52:22.731-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.999-0500 c20011| 2016-04-06T02:52:22.731-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:06.999-0500 c20011| 2016-04-06T02:52:22.731-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.018-0500 c20011| 2016-04-06T02:52:22.731-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 170 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.731-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.018-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.019-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.021-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.031-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.032-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.034-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.037-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.040-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.042-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.045-0500 c20011| 2016-04-06T02:52:22.732-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.059-0500 c20011| 2016-04-06T02:52:22.732-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 170 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:07.064-0500 c20011| 2016-04-06T02:52:22.735-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.075-0500 c20011| 2016-04-06T02:52:22.735-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:07.084-0500 c20011| 2016-04-06T02:52:22.735-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.089-0500 c20011| 2016-04-06T02:52:22.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 171 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.090-0500 c20011| 2016-04-06T02:52:22.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 171 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:07.091-0500 c20011| 2016-04-06T02:52:22.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 171 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.093-0500 c20011| 2016-04-06T02:52:22.741-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 170 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.094-0500 c20011| 2016-04-06T02:52:22.741-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|11, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.095-0500 c20011| 2016-04-06T02:52:22.741-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:07.096-0500 c20011| 2016-04-06T02:52:22.741-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 174 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.741-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.096-0500 c20011| 2016-04-06T02:52:22.741-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 174 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:07.098-0500 c20011| 2016-04-06T02:52:22.742-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 174 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|12, t: 2, h: -7145308920045400114, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.099-0500 c20011| 2016-04-06T02:52:22.742-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|12 and ending at ts: Timestamp 1459929142000|12
[js_test:multi_coll_drop] 2016-04-06T02:53:07.099-0500 c20011| 2016-04-06T02:52:22.742-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:07.100-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.100-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.100-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.100-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.102-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.102-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.102-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.103-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.103-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.103-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.104-0500 c20011| 2016-04-06T02:52:22.743-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:07.105-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.105-0500 c20011| 2016-04-06T02:52:22.743-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.106-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.106-0500 c20011| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.107-0500 c20011| 2016-04-06T02:52:22.744-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.107-0500 c20011| 2016-04-06T02:52:22.744-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.109-0500 c20011| 2016-04-06T02:52:22.744-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 176 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.744-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } }
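
The startCommand/getMore pairs above are the secondary tailing the primary's oplog: each batch request allows a 2.5-second server-side wait and piggybacks the fetcher's term and lastKnownCommittedOpTime so the sync source can advance its commit point. The shape of one such request, with the cursor id and optime taken from the log (illustrative; these requests are issued by the background sync fetcher, not by user code):

    db.getSiblingDB("local").runCommand({
        getMore: NumberLong("22197973872"),   // tailable cursor over local.oplog.rs
        collection: "oplog.rs",
        maxTimeMS: 2500,                      // await up to 2.5s for new oplog entries
        term: NumberLong(2),
        lastKnownCommittedOpTime: { ts: Timestamp(1459929142, 11), t: NumberLong(2) }
    });
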
[js_test:multi_coll_drop] 2016-04-06T02:53:07.112-0500 c20011| 2016-04-06T02:52:22.744-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 176 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:07.113-0500 c20011| 2016-04-06T02:52:22.744-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.113-0500 c20011| 2016-04-06T02:52:22.744-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.122-0500 c20011| 2016-04-06T02:52:22.744-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.125-0500 c20011| 2016-04-06T02:52:22.744-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.129-0500 c20011| 2016-04-06T02:52:22.744-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.129-0500 c20011| 2016-04-06T02:52:22.744-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.130-0500 c20011| 2016-04-06T02:52:22.745-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.133-0500 c20011| 2016-04-06T02:52:22.745-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.136-0500 c20011| 2016-04-06T02:52:22.745-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 177 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.136-0500 c20011| 2016-04-06T02:52:22.745-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 177 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:07.137-0500 c20011| 2016-04-06T02:52:22.745-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.137-0500 c20011| 2016-04-06T02:52:22.745-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.140-0500 c20011| 2016-04-06T02:52:22.745-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 177 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.140-0500 c20011| 2016-04-06T02:52:22.745-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.142-0500 c20011| 2016-04-06T02:52:22.745-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.143-0500 c20011| 2016-04-06T02:52:22.745-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.145-0500 c20011| 2016-04-06T02:52:22.745-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.146-0500 c20011| 2016-04-06T02:52:22.745-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.147-0500 c20011| 2016-04-06T02:52:22.746-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.148-0500 c20011| 2016-04-06T02:52:22.746-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.149-0500 c20011| 2016-04-06T02:52:22.746-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:07.149-0500 c20011| 2016-04-06T02:52:22.746-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:07.150-0500 c20011| 2016-04-06T02:52:22.746-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 176 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.151-0500 c20011| 2016-04-06T02:52:22.746-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|12, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.155-0500 c20011| 2016-04-06T02:52:22.746-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.160-0500 c20011| 2016-04-06T02:52:22.747-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:07.164-0500 c20011| 2016-04-06T02:52:22.747-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 180 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.168-0500 c20011| 2016-04-06T02:52:22.747-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 181 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.747-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.168-0500 c20011| 2016-04-06T02:52:22.747-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 180 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:07.170-0500 c20011| 2016-04-06T02:52:22.747-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 181 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:07.170-0500 c20011| 2016-04-06T02:52:22.747-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 180 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.172-0500 c20011| 2016-04-06T02:52:22.749-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.174-0500 c20011| 2016-04-06T02:52:22.749-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 183 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.176-0500 c20011| 2016-04-06T02:52:22.749-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 183 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:07.177-0500 c20011| 2016-04-06T02:52:23.553-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 184 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:33.553-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:07.178-0500 c20011| 2016-04-06T02:52:23.554-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 184 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:07.180-0500 c20011| 2016-04-06T02:52:23.554-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 184 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp
1459929142000|12, t: 2 }, opTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.181-0500 c20011| 2016-04-06T02:52:23.554-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:25.554Z [js_test:multi_coll_drop] 2016-04-06T02:53:07.183-0500 c20011| 2016-04-06T02:52:24.055-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 186 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:34.055-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.184-0500 c20011| 2016-04-06T02:52:24.055-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 186 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:07.185-0500 c20011| 2016-04-06T02:52:24.056-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.188-0500 c20011| 2016-04-06T02:52:24.056-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:07.192-0500 c20011| 2016-04-06T02:52:24.056-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.192-0500 c20011| 2016-04-06T02:52:25.025-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:60039 #41 (14 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:07.193-0500 c20011| 2016-04-06T02:52:25.025-0500 D COMMAND [conn41] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.195-0500 c20011| 2016-04-06T02:52:25.025-0500 I COMMAND [conn41] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.196-0500 c20011| 2016-04-06T02:52:25.025-0500 D COMMAND [conn41] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.487-0500 c20011| 2016-04-06T02:52:25.026-0500 I COMMAND [conn41] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.491-0500 c20011| 2016-04-06T02:52:25.554-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 187 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:35.554-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.497-0500 c20011| 2016-04-06T02:52:25.555-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 187 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:07.511-0500 c20011| 2016-04-06T02:52:25.555-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 187 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, opTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.522-0500 c20011| 2016-04-06T02:52:25.555-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:27.555Z 
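Annotation: the replSetHeartbeat traffic above (requests 184-187 and their responses) is how each config node learns the other members' state, term, and durable/applied optimes, and the replSetUpdatePosition commands carry the same optime progress upstream. The same heartbeat-derived view is available interactively. A minimal shell sketch, assuming a reachable config node at mongovm16:20011 as in this run (hostname and port taken from the log; adjust for other setups):

    // Connect to one config server and print the heartbeat-derived member
    // state: name, replica set state, last applied optime, and sync source.
    var conn = new Mongo("mongovm16:20011");
    var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
    assert.commandWorked(status);
    status.members.forEach(function (m) {
        print(m.name + "  state=" + m.stateStr +
              "  optime=" + tojson(m.optime) +
              (m.syncingTo ? "  syncingTo=" + m.syncingTo : ""));
    });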
[js_test:multi_coll_drop] 2016-04-06T02:53:07.526-0500 c20011| 2016-04-06T02:52:26.056-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.527-0500 c20011| 2016-04-06T02:52:26.056-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:07.531-0500 c20011| 2016-04-06T02:52:26.056-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.536-0500 c20011| 2016-04-06T02:52:26.805-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|44 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.538-0500 c20011| 2016-04-06T02:52:26.805-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.545-0500 c20011| 2016-04-06T02:52:26.805-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|44 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.548-0500 c20011| 2016-04-06T02:52:26.805-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 183 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.550-0500 c20011| 2016-04-06T02:52:26.806-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:07.552-0500 c20011| 2016-04-06T02:52:26.806-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 181 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.552-0500 c20011| 2016-04-06T02:52:26.807-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:07.564-0500 c20011| 2016-04-06T02:52:26.807-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 191 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.807-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.566-0500 c20011| 2016-04-06T02:52:26.807-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 191 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:07.584-0500 c20011| 2016-04-06T02:52:26.809-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 186 finished with response: { ok: 1.0, electionTime: new Date(6270347906482438145), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, opTime: { ts: Timestamp 1459929142000|12, t: 2 } } 
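Annotation: the config.chunks reads above use readConcern { level: "majority", afterOpTime: ... }; the "Waiting for 'committed' snapshot" / "Using 'committed' snapshot" pair shows the server blocking the command (up to maxTimeMS) until its committed snapshot has advanced to at least the requested optime, then answering from that snapshot. A minimal shell sketch of the same read, reusing the conn handle from the sketch above; the optime values come from the log, where Timestamp 1459929142000|12 denotes seconds 1459929142, increment 12:

    // Majority read against the config server: blocks until the committed
    // snapshot reaches the requested optime, then runs the find on it.
    var res = conn.getDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929142, 12), t: NumberLong(2) }
        },
        maxTimeMS: 30000
    });
    assert.commandWorked(res);
    printjson(res.cursor.firstBatch);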
[js_test:multi_coll_drop] 2016-04-06T02:53:07.586-0500 c20011| 2016-04-06T02:52:26.809-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:28.809Z [js_test:multi_coll_drop] 2016-04-06T02:53:07.589-0500 c20011| 2016-04-06T02:52:26.810-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|44 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.591-0500 c20011| 2016-04-06T02:52:26.810-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.592-0500 c20011| 2016-04-06T02:52:26.810-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.593-0500 c20011| 2016-04-06T02:52:26.810-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.594-0500 c20011| 2016-04-06T02:52:26.810-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:07.598-0500 c20011| 2016-04-06T02:52:26.811-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.599-0500 c20011| 2016-04-06T02:52:26.811-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.599-0500 c20011| 2016-04-06T02:52:26.811-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:07.601-0500 c20011| 2016-04-06T02:52:26.811-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } numYields:0 reslen:509 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.606-0500 c20011| 2016-04-06T02:52:26.812-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 191 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 
1459929146000|1, t: 2, h: -9183148587310720839, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03a65c17830b843f1ab'), state: 2, when: new Date(1459929146811), why: "splitting chunk [{ _id: -78.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.608-0500 c20011| 2016-04-06T02:52:26.812-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|1 and ending at ts: Timestamp 1459929146000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:07.609-0500 c20011| 2016-04-06T02:52:26.812-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:07.612-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.613-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.613-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.614-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.614-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.614-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.616-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.616-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.616-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.621-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.623-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.631-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.632-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.632-0500 c20011| 2016-04-06T02:52:26.813-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:07.634-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.634-0500 c20011| 2016-04-06T02:52:26.813-0500 D QUERY [repl writer worker 14] Using idhack: { _id: 
"multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:07.646-0500 c20011| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.647-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.649-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.650-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.662-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.667-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.668-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.669-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.670-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.672-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.673-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.673-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.675-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.677-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.677-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.677-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.682-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.683-0500 c20011| 2016-04-06T02:52:26.814-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.687-0500 c20011| 2016-04-06T02:52:26.814-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 194 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.814-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.688-0500 c20011| 2016-04-06T02:52:26.814-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:07.690-0500 c20011| 2016-04-06T02:52:26.815-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 194 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:07.695-0500 c20011| 2016-04-06T02:52:26.815-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:07.701-0500 c20011| 2016-04-06T02:52:26.815-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 195 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:07.702-0500 c20011| 2016-04-06T02:52:26.815-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 195 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:07.703-0500 c20011| 2016-04-06T02:52:26.815-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 195 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.709-0500 c20011| 2016-04-06T02:52:26.818-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:07.724-0500 c20011| 2016-04-06T02:52:26.818-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 197 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: 
Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:07.725-0500 c20011| 2016-04-06T02:52:26.818-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 197 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:07.726-0500 c20011| 2016-04-06T02:52:26.819-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 197 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.727-0500 c20011| 2016-04-06T02:52:26.819-0500 D COMMAND [conn40] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.729-0500 c20011| 2016-04-06T02:52:26.819-0500 D REPL [conn40] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929146000|1, t: 2 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.732-0500 c20011| 2016-04-06T02:52:26.819-0500 D REPL [conn40] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999964μs [js_test:multi_coll_drop] 2016-04-06T02:53:07.734-0500 c20011| 2016-04-06T02:52:26.820-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 194 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.735-0500 c20011| 2016-04-06T02:52:26.820-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.736-0500 c20011| 2016-04-06T02:52:26.820-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:07.738-0500 c20011| 2016-04-06T02:52:26.820-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.740-0500 c20011| 2016-04-06T02:52:26.820-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 200 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.820-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.740-0500 c20011| 2016-04-06T02:52:26.820-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 200 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:07.743-0500 c20011| 2016-04-06T02:52:26.820-0500 D COMMAND [conn40] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.744-0500 c20011| 2016-04-06T02:52:26.820-0500 D QUERY [conn40] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:07.761-0500 c20011| 2016-04-06T02:52:26.820-0500 I COMMAND [conn40] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.772-0500 c20011| 2016-04-06T02:52:26.821-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|46 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.776-0500 c20011| 2016-04-06T02:52:26.821-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.777-0500 c20011| 2016-04-06T02:52:26.821-0500 D COMMAND [conn40] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|46 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.778-0500 c20011| 2016-04-06T02:52:26.821-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:07.781-0500 c20011| 2016-04-06T02:52:26.821-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|46 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.782-0500 c20012| 2016-04-06T02:52:08.927-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.790-0500 c20012| 2016-04-06T02:52:08.927-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.790-0500 c20012| 2016-04-06T02:52:08.927-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.794-0500 c20012| 2016-04-06T02:52:08.927-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool 
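Annotation: the D QUERY score(...) lines record how the query plan ranker chooses among candidate plans: score = baseScore + productivity + tieBreakers, where productivity is advanced/works from the trial run and each tie-breaker bonus contributes 0.0001. A sketch recomputing the score(1.5003) value logged just above, with every term taken directly from that log line:

    // Recompute the plan-ranking score from the D QUERY line above:
    // score(1.5003) = baseScore(1) + (1 advanced)/(2 works) + 3 * 0.0001
    var baseScore = 1;
    var advanced = 1, works = 2;                 // from the logged trial run
    var tieBreakers = 0.0001 + 0.0001 + 0.0001;  // noFetch + noSort + noIxisect
    var score = baseScore + advanced / works + tieBreakers;
    print(score.toFixed(4));                     // 1.5003, matching the log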
[js_test:multi_coll_drop] 2016-04-06T02:53:07.797-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.799-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.801-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.802-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.806-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.808-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.810-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.811-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.814-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.816-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.816-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.817-0500 c20012| 2016-04-06T02:52:08.928-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:07.819-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.821-0500 c20012| 2016-04-06T02:52:08.928-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:07.826-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.828-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.829-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.832-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.833-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.833-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.835-0500 c20012| 
2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.840-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.843-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.843-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.844-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.845-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.846-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.846-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.846-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.847-0500 c20012| 2016-04-06T02:52:08.928-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.849-0500 c20012| 2016-04-06T02:52:08.928-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:07.853-0500 c20012| 2016-04-06T02:52:08.928-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:07.858-0500 c20012| 2016-04-06T02:52:08.928-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 754 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|59, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:07.863-0500 c20012| 2016-04-06T02:52:08.928-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 754 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:07.865-0500 c20012| 2016-04-06T02:52:08.929-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 754 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.868-0500 c20012| 2016-04-06T02:52:08.929-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:07.877-0500 c20012| 2016-04-06T02:52:08.929-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 756 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:07.878-0500 c20012| 2016-04-06T02:52:08.929-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 756 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:07.879-0500 c20012| 2016-04-06T02:52:08.929-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 756 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.890-0500 c20012| 2016-04-06T02:52:08.929-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 758 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.929-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|59, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.891-0500 c20012| 2016-04-06T02:52:08.930-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 758 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:07.892-0500 c20012| 2016-04-06T02:52:08.930-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 758 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.893-0500 c20012| 2016-04-06T02:52:08.930-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|60, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.896-0500 c20012| 2016-04-06T02:52:08.930-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.898-0500 c20012| 2016-04-06T02:52:08.930-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.901-0500 c20012| 2016-04-06T02:52:08.930-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.903-0500 c20012| 2016-04-06T02:52:08.930-0500 D QUERY [conn7] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:07.906-0500 c20012| 2016-04-06T02:52:08.930-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:07.910-0500 c20012| 2016-04-06T02:52:08.930-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|22 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|60, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:07.913-0500 c20012| 2016-04-06T02:52:08.931-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 760 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.931-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:07.916-0500 c20012| 2016-04-06T02:52:08.931-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 760 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:07.918-0500 c20012| 2016-04-06T02:52:08.932-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 760 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|61, t: 1, h: -5097362160621272068, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f194'), state: 2, when: new Date(1459929128932), why: "splitting chunk [{ _id: -89.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:07.918-0500 c20012| 2016-04-06T02:52:08.932-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|61 and ending at ts: Timestamp 1459929128000|61 [js_test:multi_coll_drop] 2016-04-06T02:53:07.921-0500 c20012| 2016-04-06T02:52:08.932-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:07.922-0500 c20012| 2016-04-06T02:52:08.932-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.923-0500 c20012| 2016-04-06T02:52:08.932-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.925-0500 c20012| 2016-04-06T02:52:08.932-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.926-0500 c20012| 2016-04-06T02:52:08.932-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.929-0500 c20012| 2016-04-06T02:52:08.932-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.931-0500 c20012| 2016-04-06T02:52:08.932-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.934-0500 c20012| 2016-04-06T02:52:08.932-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.934-0500 c20012| 2016-04-06T02:52:08.932-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.948-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.966-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.969-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.971-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.971-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.972-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.972-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.975-0500 c20012| 2016-04-06T02:52:08.933-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:07.976-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.977-0500 c20012| 2016-04-06T02:52:08.933-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:07.979-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.980-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:07.983-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.985-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.986-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.990-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.993-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.993-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:07.995-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.028-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.030-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.031-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.031-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.031-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.033-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.033-0500 c20012| 2016-04-06T02:52:08.933-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.033-0500 c20012| 2016-04-06T02:52:08.933-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.043-0500 c20012| 2016-04-06T02:52:08.933-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.050-0500 c20012| 2016-04-06T02:52:08.933-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 762 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|60, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.052-0500 c20012| 2016-04-06T02:52:08.933-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 762 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.053-0500 c20012| 2016-04-06T02:52:08.934-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 762 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.062-0500 c20012| 2016-04-06T02:52:08.934-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.066-0500 c20012| 2016-04-06T02:52:08.934-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 764 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.070-0500 c20012| 2016-04-06T02:52:08.934-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 764 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.072-0500 c20012| 2016-04-06T02:52:08.934-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 764 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.077-0500 c20012| 2016-04-06T02:52:08.934-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 766 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.934-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|60, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.079-0500 c20012| 2016-04-06T02:52:08.934-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 766 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.084-0500 c20012| 2016-04-06T02:52:08.934-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|61, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.086-0500 c20012| 2016-04-06T02:52:08.935-0500 D REPL [conn11] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929128000|61, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929128000|60, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.088-0500 c20012| 2016-04-06T02:52:08.935-0500 D REPL [conn11] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999984μs [js_test:multi_coll_drop] 2016-04-06T02:53:08.094-0500 c20012| 2016-04-06T02:52:08.935-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 766 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.095-0500 c20012| 2016-04-06T02:52:08.935-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|61, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.104-0500 c20012| 2016-04-06T02:52:08.935-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.107-0500 c20012| 2016-04-06T02:52:08.935-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|61, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.108-0500 c20012| 2016-04-06T02:52:08.935-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:08.109-0500 c20012| 2016-04-06T02:52:08.935-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:08.115-0500 c20012| 2016-04-06T02:52:08.935-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 768 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.935-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.117-0500 c20012| 2016-04-06T02:52:08.935-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 768 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.121-0500 c20012| 2016-04-06T02:52:08.935-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|61, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:08.121-0500 c20012| 2016-04-06T02:52:08.936-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|24 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|61, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.123-0500 c20012| 2016-04-06T02:52:08.936-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.125-0500 c20012| 2016-04-06T02:52:08.936-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|24 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|61, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.126-0500 c20012| 2016-04-06T02:52:08.936-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:08.128-0500 c20012| 2016-04-06T02:52:08.936-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|24 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|61, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:08.133-0500 c20012| 2016-04-06T02:52:08.937-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 768 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|62, t: 1, h: 7031161474010338798, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: -88.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-89.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-88.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.136-0500 c20012| 2016-04-06T02:52:08.937-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|62 and ending at ts: Timestamp 1459929128000|62 [js_test:multi_coll_drop] 2016-04-06T02:53:08.138-0500 c20012| 2016-04-06T02:52:08.937-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.138-0500 c20012| 2016-04-06T02:52:08.937-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.139-0500 c20012| 2016-04-06T02:52:08.937-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.140-0500 c20012| 2016-04-06T02:52:08.937-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.145-0500 c20012| 2016-04-06T02:52:08.937-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.147-0500 c20012| 2016-04-06T02:52:08.937-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.148-0500 c20012| 2016-04-06T02:52:08.937-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.148-0500 c20012| 2016-04-06T02:52:08.937-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.148-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.148-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.150-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.152-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.155-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.158-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.160-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.163-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.174-0500 c20012| 2016-04-06T02:52:08.938-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:08.175-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.177-0500 c20012| 2016-04-06T02:52:08.938-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-89.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:08.178-0500 c20012| 2016-04-06T02:52:08.938-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-88.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:08.183-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
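The applyOps entry fetched above (ts Timestamp 1459929128000|62) is a chunk split expressed as two config.chunks upserts: [{ _id: -89.0 }, { _id: MaxKey }) becomes [{ _id: -89.0 }, { _id: -88.0 }) at lastmod 1000|25 plus [{ _id: -88.0 }, { _id: MaxKey }) at 1000|26, and the refresh on conn11 picks these up by filtering config.chunks on lastmod $gte. A minimal mongo-shell sketch of the same inspection; the namespace and sort come from this log, the rest is illustrative:

    // Sketch: list the chunks of multidrop.coll in lastmod order, as the
    // metadata refresh above does with a find on config.chunks.
    var cfg = db.getSiblingDB("config");
    cfg.chunks.find({ ns: "multidrop.coll" }).sort({ lastmod: 1 }).forEach(function (c) {
        print(tojson(c.min) + " -> " + tojson(c.max) + " lastmod " + tojson(c.lastmod));
    });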
2016-04-06T02:53:08.185-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.185-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.189-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.190-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.192-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.192-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.194-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.195-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.195-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.196-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.197-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.202-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.207-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.215-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.216-0500 c20012| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.225-0500 c20012| 2016-04-06T02:52:08.938-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.235-0500 c20012| 2016-04-06T02:52:08.938-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.242-0500 c20012| 2016-04-06T02:52:08.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 770 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.245-0500 c20012| 2016-04-06T02:52:08.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 770 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.246-0500 c20012| 2016-04-06T02:52:08.939-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 770 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.249-0500 c20012| 2016-04-06T02:52:08.939-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 772 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.939-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.254-0500 c20012| 2016-04-06T02:52:08.939-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.261-0500 c20012| 2016-04-06T02:52:08.939-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 773 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] 
} [js_test:multi_coll_drop] 2016-04-06T02:53:08.262-0500 c20012| 2016-04-06T02:52:08.939-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 772 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.263-0500 c20012| 2016-04-06T02:52:08.939-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 773 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.265-0500 c20012| 2016-04-06T02:52:08.940-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 773 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.266-0500 c20012| 2016-04-06T02:52:08.940-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 772 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.266-0500 c20012| 2016-04-06T02:52:08.940-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|62, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.267-0500 c20012| 2016-04-06T02:52:08.940-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:08.272-0500 c20012| 2016-04-06T02:52:08.940-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 776 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.940-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.273-0500 c20012| 2016-04-06T02:52:08.941-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 776 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.278-0500 c20012| 2016-04-06T02:52:08.941-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 776 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|63, t: 1, h: 964671473381320939, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.940-0500-5704c02865c17830b843f195", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128940), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -89.0 }, max: { _id: MaxKey } }, left: { min: { _id: -89.0 }, max: { _id: -88.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -88.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.279-0500 c20012| 2016-04-06T02:52:08.941-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|63 and ending at ts: Timestamp 1459929128000|63 [js_test:multi_coll_drop] 2016-04-06T02:53:08.280-0500 c20012| 2016-04-06T02:52:08.941-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.282-0500 c20012| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.283-0500 c20012| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.283-0500 c20012| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.284-0500 c20012| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.285-0500 c20012| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.286-0500 c20012| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.287-0500 c20012| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.287-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.288-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.289-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.290-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.291-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.292-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.292-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.295-0500 c20012| 2016-04-06T02:52:08.942-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:08.296-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.297-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.298-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.299-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.300-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
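Besides the chunk documents themselves, each completed split is recorded in config.changelog; the insert fetched above (what: "split") carries the before/left/right bounds under details. A hedged shell sketch for reviewing recent split events, with field names taken from the logged document:

    // Sketch: the five most recent split events for this collection.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .limit(5)
        .forEach(printjson);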
2016-04-06T02:53:08.311-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.313-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.316-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.317-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.336-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.351-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.355-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.357-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.374-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.380-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.382-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.385-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.386-0500 c20012| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.387-0500 c20012| 2016-04-06T02:52:08.942-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.405-0500 c20012| 2016-04-06T02:52:08.943-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.422-0500 c20012| 2016-04-06T02:52:08.943-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 778 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.430-0500 c20012| 2016-04-06T02:52:08.943-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 778 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.441-0500 c20012| 2016-04-06T02:52:08.943-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 778 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.456-0500 c20012| 2016-04-06T02:52:08.943-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 780 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.943-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.461-0500 c20012| 2016-04-06T02:52:08.943-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 780 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.465-0500 c20012| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 780 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.468-0500 c20012| 2016-04-06T02:52:08.944-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|63, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.472-0500 c20012| 2016-04-06T02:52:08.944-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:08.477-0500 c20012| 2016-04-06T02:52:08.944-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 782 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.944-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.481-0500 c20012| 2016-04-06T02:52:08.944-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater 
mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.487-0500 c20012| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 783 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.495-0500 c20012| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 783 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.496-0500 c20012| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 782 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.498-0500 c20012| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 783 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.499-0500 c20012| 2016-04-06T02:52:08.945-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 782 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|64, t: 1, h: -930003874952597810, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.501-0500 c20012| 2016-04-06T02:52:08.945-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|64 and ending at ts: Timestamp 1459929128000|64 [js_test:multi_coll_drop] 2016-04-06T02:53:08.504-0500 c20012| 2016-04-06T02:52:08.945-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.508-0500 c20012| 2016-04-06T02:52:08.945-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.509-0500 c20012| 2016-04-06T02:52:08.945-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.523-0500 c20012| 2016-04-06T02:52:08.945-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.524-0500 c20012| 2016-04-06T02:52:08.945-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.525-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.526-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.527-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.528-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.530-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.530-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.531-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.531-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.533-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.536-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.537-0500 c20012| 2016-04-06T02:52:08.947-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:08.537-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.538-0500 c20012| 2016-04-06T02:52:08.947-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:08.541-0500 c20012| 2016-04-06T02:52:08.947-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 786 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.947-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.544-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
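The config.locks update applied in this batch ($set { state: 0 } on _id "multidrop.coll") releases the collection's distributed lock between split rounds; a few entries later it is re-acquired with state: 2 and a why of "splitting chunk ...". A sketch for checking the lock document on the config server, assuming the 0 = unlocked / 2 = held convention these transitions show:

    // Sketch: current distributed-lock state for the collection.
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    if (lock !== null) {
        // state 0: unlocked, state 2: held; "why" records the holder's reason.
        printjson({ state: lock.state, why: lock.why, when: lock.when });
    }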
2016-04-06T02:53:08.545-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.547-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.547-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.548-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.551-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.551-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.553-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.556-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.558-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.560-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.562-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.565-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.567-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.568-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.569-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.573-0500 c20012| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.578-0500 c20012| 2016-04-06T02:52:08.948-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 786 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.579-0500 c20012| 2016-04-06T02:52:08.948-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.582-0500 c20012| 2016-04-06T02:52:08.948-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.592-0500 c20012| 2016-04-06T02:52:08.948-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 787 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.594-0500 c20012| 2016-04-06T02:52:08.948-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 787 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.595-0500 c20012| 2016-04-06T02:52:08.948-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 787 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.605-0500 c20012| 2016-04-06T02:52:08.950-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.612-0500 c20012| 2016-04-06T02:52:08.950-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 789 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.613-0500 c20012| 2016-04-06T02:52:08.950-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 789 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.617-0500 c20012| 2016-04-06T02:52:08.950-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 789 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.621-0500 c20012| 2016-04-06T02:52:08.951-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 786 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.627-0500 c20012| 2016-04-06T02:52:08.951-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|64, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.627-0500 c20012| 2016-04-06T02:52:08.951-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:08.632-0500 c20012| 2016-04-06T02:52:08.951-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 792 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.951-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.633-0500 c20012| 2016-04-06T02:52:08.951-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 792 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.635-0500 c20012| 2016-04-06T02:52:08.954-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 792 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|65, t: 1, h: 2692489107514904355, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f196'), state: 2, when: new Date(1459929128953), why: "splitting chunk [{ _id: -88.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.638-0500 c20012| 2016-04-06T02:52:08.954-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|65 and ending at ts: Timestamp 1459929128000|65 [js_test:multi_coll_drop] 2016-04-06T02:53:08.641-0500 c20012| 2016-04-06T02:52:08.954-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.644-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.645-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.646-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.646-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.647-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.647-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.648-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.649-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.650-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.651-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.651-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.652-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.652-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.653-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.654-0500 c20012| 2016-04-06T02:52:08.955-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:08.655-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.657-0500 c20012| 2016-04-06T02:52:08.955-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:08.658-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.658-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.661-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
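The replSetUpdatePosition traffic running through this stretch is each member reporting its durable and applied optimes to its sync source; the "Updating _lastCommittedOpTime" lines show the commit point advancing once a majority holds a given optime. The same per-member picture is visible from the shell; a sketch, assuming a connection to a member of multidrop-configRS and field shapes as in this 3.3-era build:

    // Sketch: per-member applied optimes, the data replSetUpdatePosition carries.
    rs.status().members.forEach(function (m) {
        print(m.name + " " + m.stateStr + " optime " + tojson(m.optime));
    });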
2016-04-06T02:53:08.662-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.664-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.666-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.667-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.667-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.670-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.672-0500 c20012| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.673-0500 c20012| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.675-0500 c20012| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.676-0500 c20012| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.677-0500 c20012| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.678-0500 c20012| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.679-0500 c20012| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.680-0500 c20012| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.682-0500 c20012| 2016-04-06T02:52:08.956-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.688-0500 c20012| 2016-04-06T02:52:08.956-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.702-0500 c20012| 2016-04-06T02:52:08.956-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 794 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.712-0500 c20012| 2016-04-06T02:52:08.956-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 794 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.714-0500 c20012| 2016-04-06T02:52:08.956-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 795 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.956-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.715-0500 c20012| 2016-04-06T02:52:08.957-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 795 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.715-0500 c20012| 2016-04-06T02:52:08.957-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 794 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.724-0500 c20012| 2016-04-06T02:52:08.961-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 795 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.738-0500 c20012| 2016-04-06T02:52:08.961-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|65, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.738-0500 c20012| 2016-04-06T02:52:08.961-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:08.761-0500 c20012| 2016-04-06T02:52:08.961-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 798 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.961-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.763-0500 c20012| 2016-04-06T02:52:08.961-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 798 on host mongovm16:20011 
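The catalog reads below use readConcern level "majority" with afterOpTime, which makes the find block (up to maxTimeMS) until a committed snapshot at or past that optime exists; that wait is what the "Waiting for 'committed' snapshot" lines record. A sketch of the equivalent command, with values copied from this log; afterOpTime is an internal field used by the sharding catalog client, so this is illustrative rather than a public API:

    // Sketch: majority read pinned after the split's optime, as issued on conn11.
    // Timestamp(seconds, increment) here is the logged Timestamp 1459929128000|65.
    db.getSiblingDB("config").runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929128, 65), t: NumberLong(1) } },
        limit: 1,
        maxTimeMS: 30000
    });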
[js_test:multi_coll_drop] 2016-04-06T02:53:08.772-0500 c20012| 2016-04-06T02:52:08.962-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.775-0500 c20012| 2016-04-06T02:52:08.962-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 799 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.779-0500 c20012| 2016-04-06T02:52:08.962-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 799 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.784-0500 c20012| 2016-04-06T02:52:08.962-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|65, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.787-0500 c20012| 2016-04-06T02:52:08.962-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.794-0500 c20012| 2016-04-06T02:52:08.962-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|65, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.799-0500 c20012| 2016-04-06T02:52:08.962-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:08.808-0500 c20012| 2016-04-06T02:52:08.963-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|65, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:08.808-0500 c20012| 2016-04-06T02:52:08.964-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 799 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.813-0500 c20012| 2016-04-06T02:52:08.964-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 798 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|66, t: 1, h: -6638103080377994745, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: -87.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-88.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-87.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.814-0500 c20012| 2016-04-06T02:52:08.964-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|66 and ending at ts: Timestamp 1459929128000|66 [js_test:multi_coll_drop] 2016-04-06T02:53:08.816-0500 c20012| 2016-04-06T02:52:08.964-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.817-0500 c20012| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.817-0500 c20012| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.818-0500 c20012| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.819-0500 c20012| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.821-0500 c20012| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.821-0500 c20012| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.821-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.823-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.823-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.824-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.824-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.837-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.842-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.847-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.852-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.853-0500 c20012| 2016-04-06T02:52:08.965-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:08.856-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.856-0500 c20012| 2016-04-06T02:52:08.965-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-88.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:08.856-0500 c20012| 2016-04-06T02:52:08.965-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-87.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:08.857-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:08.857-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.859-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.860-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.862-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.863-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.868-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.869-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.870-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.871-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.872-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.874-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.875-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.875-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.876-0500 c20012| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.877-0500 c20012| 2016-04-06T02:52:08.966-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.879-0500 c20012| 2016-04-06T02:52:08.966-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.881-0500 c20012| 2016-04-06T02:52:08.966-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.895-0500 c20012| 2016-04-06T02:52:08.966-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 802 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.897-0500 c20012| 2016-04-06T02:52:08.966-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 802 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.899-0500 c20012| 2016-04-06T02:52:08.966-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 802 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.906-0500 c20012| 2016-04-06T02:52:08.966-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 804 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.966-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.910-0500 c20012| 2016-04-06T02:52:08.966-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 804 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.915-0500 c20012| 2016-04-06T02:52:08.967-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.920-0500 c20012| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 805 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|66, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:08.922-0500 c20012| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 805 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.924-0500 c20012| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 805 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.927-0500 c20012| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 804 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.928-0500 c20012| 2016-04-06T02:52:08.968-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|66, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.932-0500 c20012| 2016-04-06T02:52:08.968-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:08.944-0500 c20012| 2016-04-06T02:52:08.968-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 808 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.968-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:08.945-0500 c20012| 2016-04-06T02:52:08.968-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 808 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:08.955-0500 c20012| 2016-04-06T02:52:08.968-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 808 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|67, t: 1, h: -1218800546483451830, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.967-0500-5704c02865c17830b843f197", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128967), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -88.0 }, max: { _id: MaxKey } }, left: { min: { _id: -88.0 }, max: { _id: -87.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -87.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:08.958-0500 c20012| 2016-04-06T02:52:08.968-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|67 and ending at ts: Timestamp 1459929128000|67 [js_test:multi_coll_drop] 2016-04-06T02:53:08.961-0500 c20012| 2016-04-06T02:52:08.969-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:08.964-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.967-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.969-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.972-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.972-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.974-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.975-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.976-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.978-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.981-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.983-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.984-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.985-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.988-0500 c20012| 2016-04-06T02:52:08.969-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:08.990-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.991-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.992-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.992-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.993-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.993-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:08.994-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.995-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.997-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.998-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:08.999-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.005-0500 c20012| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.005-0500 c20012| 2016-04-06T02:52:08.970-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.005-0500 c20012| 2016-04-06T02:52:08.970-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.005-0500 c20012| 2016-04-06T02:52:08.970-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.005-0500 c20012| 2016-04-06T02:52:08.970-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.006-0500 c20012| 2016-04-06T02:52:08.970-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.007-0500 c20012| 2016-04-06T02:52:08.970-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.011-0500 c20012| 2016-04-06T02:52:08.970-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.011-0500 c20012| 2016-04-06T02:52:08.970-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.012-0500 c20012| 2016-04-06T02:52:08.970-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.013-0500 c20012| 2016-04-06T02:52:08.970-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 810 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.013-0500 c20012| 2016-04-06T02:52:08.970-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 810 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.014-0500 c20012| 2016-04-06T02:52:08.971-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 811 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.971-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.014-0500 c20012| 2016-04-06T02:52:08.971-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 811 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.015-0500 c20012| 2016-04-06T02:52:08.971-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 810 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.016-0500 c20012| 2016-04-06T02:52:08.971-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.017-0500 c20012| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 813 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|67, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.018-0500 c20012| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 813 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.018-0500 c20012| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 813 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.018-0500 c20012| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 811 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.018-0500 c20012| 2016-04-06T02:52:08.972-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|67, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.019-0500 c20012| 2016-04-06T02:52:08.972-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:09.019-0500 c20012| 2016-04-06T02:52:08.972-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 816 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.972-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.020-0500 c20012| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 816 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.027-0500 c20012| 2016-04-06T02:52:08.973-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 816 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|68, t: 1, h: -2285432667988156004, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.029-0500 c20012| 2016-04-06T02:52:08.973-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|68 and ending at ts: Timestamp 1459929128000|68 [js_test:multi_coll_drop] 2016-04-06T02:53:09.030-0500 c20012| 2016-04-06T02:52:08.973-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.031-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.032-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.035-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.035-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.038-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.040-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.041-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.044-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.045-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.046-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.047-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.050-0500 c20012| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.051-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.052-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.052-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.053-0500 c20012| 2016-04-06T02:52:08.974-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:09.056-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.058-0500 c20012| 2016-04-06T02:52:08.974-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:09.059-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.059-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:09.062-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.063-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.066-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.066-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.067-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.069-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.074-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.075-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.077-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.079-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.090-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.091-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.093-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.093-0500 c20012| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.096-0500 c20012| 2016-04-06T02:52:08.974-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.101-0500 c20012| 2016-04-06T02:52:08.975-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.108-0500 c20012| 2016-04-06T02:52:08.975-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 818 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.110-0500 c20012| 2016-04-06T02:52:08.975-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 818 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.113-0500 c20012| 2016-04-06T02:52:08.975-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 819 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.975-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.115-0500 c20012| 2016-04-06T02:52:08.975-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 819 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.118-0500 c20012| 2016-04-06T02:52:08.975-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 818 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.122-0500 c20012| 2016-04-06T02:52:08.976-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.128-0500 c20012| 2016-04-06T02:52:08.976-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 821 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|68, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.129-0500 c20012| 2016-04-06T02:52:08.976-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 821 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.130-0500 c20012| 2016-04-06T02:52:08.976-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 821 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.132-0500 c20012| 2016-04-06T02:52:08.976-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 819 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.134-0500 c20012| 2016-04-06T02:52:08.976-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|68, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.134-0500 c20012| 2016-04-06T02:52:08.976-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:09.137-0500 c20012| 2016-04-06T02:52:08.976-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.138-0500 c20012| 2016-04-06T02:52:08.976-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.140-0500 c20012| 2016-04-06T02:52:08.976-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.143-0500 c20012| 2016-04-06T02:52:08.976-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:09.148-0500 c20012| 2016-04-06T02:52:08.977-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 824 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.977-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.153-0500 c20012| 2016-04-06T02:52:08.977-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 824 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.157-0500 c20012| 2016-04-06T02:52:08.977-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:09.160-0500 c20012| 2016-04-06T02:52:08.978-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.161-0500 c20012| 2016-04-06T02:52:08.978-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.164-0500 c20012| 2016-04-06T02:52:08.978-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.166-0500 c20012| 2016-04-06T02:52:08.978-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:09.169-0500 c20012| 2016-04-06T02:52:08.978-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|68, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:09.184-0500 c20012| 2016-04-06T02:52:08.982-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 824 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|69, t: 1, h: -6723415074916916584, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f198'), state: 2, when: new Date(1459929128978), why: "splitting chunk [{ _id: -87.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.189-0500 c20012| 2016-04-06T02:52:08.982-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|69 and ending at ts: Timestamp 1459929128000|69 [js_test:multi_coll_drop] 2016-04-06T02:53:09.190-0500 c20012| 2016-04-06T02:52:08.982-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.190-0500 c20012| 2016-04-06T02:52:08.982-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.192-0500 c20012| 2016-04-06T02:52:08.982-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.192-0500 c20012| 2016-04-06T02:52:08.982-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.196-0500 c20012| 2016-04-06T02:52:08.982-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.196-0500 c20012| 2016-04-06T02:52:08.982-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.198-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.200-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.201-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.202-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.203-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.204-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.205-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.205-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.208-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.214-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.216-0500 c20012| 2016-04-06T02:52:08.983-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:09.222-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.224-0500 c20012| 2016-04-06T02:52:08.983-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:09.224-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.228-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:09.229-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.232-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.235-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.236-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.237-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.238-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.240-0500 c20012| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.241-0500 c20012| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.244-0500 c20012| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.244-0500 c20012| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.245-0500 c20012| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.246-0500 c20012| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.246-0500 c20012| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.249-0500 c20012| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.252-0500 c20012| 2016-04-06T02:52:08.984-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.256-0500 c20012| 2016-04-06T02:52:08.984-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.261-0500 c20012| 2016-04-06T02:52:08.984-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 826 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.262-0500 c20012| 2016-04-06T02:52:08.984-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 826 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.265-0500 c20012| 2016-04-06T02:52:08.984-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 827 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.984-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.267-0500 c20012| 2016-04-06T02:52:08.984-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 827 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.269-0500 c20012| 2016-04-06T02:52:08.984-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 826 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.273-0500 c20012| 2016-04-06T02:52:08.987-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.279-0500 c20012| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 829 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929128000|69, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.281-0500 c20012| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 829 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.283-0500 c20012| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 827 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.290-0500 c20012| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 829 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.294-0500 c20012| 2016-04-06T02:52:08.987-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|69, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.296-0500 c20012| 2016-04-06T02:52:08.987-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:09.299-0500 c20012| 2016-04-06T02:52:08.987-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 832 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.987-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.299-0500 c20012| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 832 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.303-0500 c20012| 2016-04-06T02:52:08.988-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|69, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.306-0500 c20012| 2016-04-06T02:52:08.988-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.313-0500 c20012| 2016-04-06T02:52:08.988-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|69, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.322-0500 c20012| 2016-04-06T02:52:08.989-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:09.325-0500 c20012| 2016-04-06T02:52:08.989-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929128000|69, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:09.328-0500 c20012| 2016-04-06T02:52:08.991-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 832 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|70, t: 1, h: 3091193383868667392, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: -86.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-87.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-86.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.333-0500 c20012| 2016-04-06T02:52:08.991-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|70 and ending at ts: Timestamp 1459929128000|70 [js_test:multi_coll_drop] 2016-04-06T02:53:09.334-0500 c20012| 2016-04-06T02:52:08.991-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.336-0500 c20012| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.337-0500 c20012| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.338-0500 c20012| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.338-0500 c20012| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.339-0500 c20012| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.341-0500 c20012| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.342-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.343-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.343-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.343-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.346-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.347-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.348-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.349-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.355-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.355-0500 c20012| 2016-04-06T02:52:08.992-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:09.356-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.357-0500 c20012| 2016-04-06T02:52:08.992-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-87.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:09.360-0500 c20012| 2016-04-06T02:52:08.992-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-86.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:09.360-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:09.360-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.363-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.363-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.366-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.372-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.376-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.383-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.384-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.384-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.386-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.387-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.388-0500 c20012| 2016-04-06T02:52:08.993-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.388-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.390-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.391-0500 c20012| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.393-0500 c20012| 2016-04-06T02:52:08.993-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.441-0500 c20012| 2016-04-06T02:52:08.993-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.463-0500 c20012| 2016-04-06T02:52:08.993-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 834 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.993-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.477-0500 c20012| 2016-04-06T02:52:08.993-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 835 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.481-0500 c20012| 2016-04-06T02:52:08.993-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 835 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.502-0500 c20012| 2016-04-06T02:52:08.993-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 834 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.504-0500 c20012| 2016-04-06T02:52:08.993-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 835 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.524-0500 c20012| 2016-04-06T02:52:08.997-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 834 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.528-0500 c20012| 2016-04-06T02:52:08.997-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|70, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.532-0500 c20012| 2016-04-06T02:52:08.997-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:09.543-0500 c20012| 2016-04-06T02:52:08.997-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 838 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.997-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.544-0500 c20012| 2016-04-06T02:52:08.997-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 838 on host mongovm16:20011 
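
[annotation] The config reads above combine readConcern level "majority" with afterOpTime, so the server waits until its majority-committed snapshot has caught up to a known optime before answering. As a minimal sketch only (not part of this test run), the same command can be issued by hand from a mongo shell; the host, namespace, and optime values are copied from the log lines above, and Mongo()/runCommand are standard shell API:

    // Sketch: a causally-consistent read against the config server seen above as c20012.
    var conn = new Mongo("mongovm16:20012");
    var res = conn.getDB("config").runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        readConcern: {
            level: "majority",
            // optime copied from the log (the log appears to render the seconds field in ms)
            afterOpTime: { ts: Timestamp(1459929128, 69), t: NumberLong(1) }
        },
        limit: 1,
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);   // the config.collections entry, once committed
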
[js_test:multi_coll_drop] 2016-04-06T02:53:09.554-0500 c20012| 2016-04-06T02:52:09.014-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.561-0500 c20012| 2016-04-06T02:52:09.014-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 839 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.566-0500 c20012| 2016-04-06T02:52:09.014-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 839 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.566-0500 c20012| 2016-04-06T02:52:09.014-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 839 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.585-0500 c20012| 2016-04-06T02:52:09.020-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 838 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|1, t: 1, h: 1591298908171832149, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:09.014-0500-5704c02965c17830b843f199", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129014), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -87.0 }, max: { _id: MaxKey } }, left: { min: { _id: -87.0 }, max: { _id: -86.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -86.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.590-0500 c20012| 2016-04-06T02:52:09.020-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|1 and ending at ts: Timestamp 1459929129000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:09.592-0500 c20012| 2016-04-06T02:52:09.020-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.596-0500 c20012| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.598-0500 c20012| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.599-0500 c20012| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.599-0500 c20012| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.600-0500 c20012| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.618-0500 c20012| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.620-0500 c20012| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.631-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.634-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.634-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.638-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.640-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.643-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.661-0500 c20012| 2016-04-06T02:52:09.021-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:09.662-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.662-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.663-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.663-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.664-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.664-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:09.665-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.666-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.668-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.670-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.670-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.671-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.674-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.674-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.675-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.675-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.677-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.677-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.677-0500 c20012| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.678-0500 c20012| 2016-04-06T02:52:09.021-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.680-0500 c20012| 2016-04-06T02:52:09.022-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.683-0500 c20012| 2016-04-06T02:52:09.022-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 842 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.683-0500 c20012| 2016-04-06T02:52:09.022-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 842 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.685-0500 c20012| 2016-04-06T02:52:09.022-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 843 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.022-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.685-0500 c20012| 2016-04-06T02:52:09.022-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 842 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.689-0500 c20012| 2016-04-06T02:52:09.022-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 843 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.692-0500 c20012| 2016-04-06T02:52:09.024-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.699-0500 c20012| 2016-04-06T02:52:09.024-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 845 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929129000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.701-0500 c20012| 2016-04-06T02:52:09.024-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 845 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.703-0500 c20012| 2016-04-06T02:52:09.025-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 845 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.705-0500 c20012| 2016-04-06T02:52:09.025-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 843 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.706-0500 c20012| 2016-04-06T02:52:09.025-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.706-0500 c20012| 2016-04-06T02:52:09.025-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:09.712-0500 c20012| 2016-04-06T02:52:09.025-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 848 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.025-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.713-0500 c20012| 2016-04-06T02:52:09.025-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 848 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.716-0500 c20012| 2016-04-06T02:52:09.029-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 848 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|2, t: 1, h: 1364947328691333013, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.719-0500 c20012| 2016-04-06T02:52:09.029-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|2 and ending at ts: Timestamp 1459929129000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:09.721-0500 c20012| 2016-04-06T02:52:09.029-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.721-0500 c20012| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.723-0500 c20012| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.724-0500 c20012| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.725-0500 c20012| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.725-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.726-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.729-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.732-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.734-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.734-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.735-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.736-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.737-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.737-0500 c20012| 2016-04-06T02:52:09.030-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:09.738-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.742-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.744-0500 c20012| 2016-04-06T02:52:09.030-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:09.746-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.748-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.749-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:09.751-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.752-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.753-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.753-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.754-0500 c20013| 2016-04-06T02:52:08.916-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.757-0500 c20013| 2016-04-06T02:52:08.916-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.764-0500 c20013| 2016-04-06T02:52:08.916-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.769-0500 c20013| 2016-04-06T02:52:08.916-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 734 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|56, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.770-0500 c20013| 2016-04-06T02:52:08.916-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 734 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.772-0500 c20013| 2016-04-06T02:52:08.917-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 734 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.775-0500 c20013| 2016-04-06T02:52:08.917-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 736 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.917-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|56, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.776-0500 c20013| 2016-04-06T02:52:08.917-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 736 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.779-0500 c20013| 2016-04-06T02:52:08.918-0500 D REPL 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.781-0500 c20013| 2016-04-06T02:52:08.918-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 737 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.783-0500 c20013| 2016-04-06T02:52:08.918-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 737 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.787-0500 c20013| 2016-04-06T02:52:08.918-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 737 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.788-0500 c20013| 2016-04-06T02:52:08.919-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 736 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.789-0500 c20013| 2016-04-06T02:52:08.919-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|57, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.794-0500 c20013| 2016-04-06T02:52:08.919-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:09.802-0500 c20013| 2016-04-06T02:52:08.919-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 740 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.919-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.804-0500 c20013| 2016-04-06T02:52:08.919-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 740 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.809-0500 c20013| 2016-04-06T02:52:08.920-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 740 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|58, t: 1, h: 4890531771943418130, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: -89.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-90.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|24, lastmodEpoch: 
ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-89.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.811-0500 c20013| 2016-04-06T02:52:08.921-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|58 and ending at ts: Timestamp 1459929128000|58 [js_test:multi_coll_drop] 2016-04-06T02:53:09.818-0500 c20013| 2016-04-06T02:52:08.921-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.820-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.822-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.824-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.825-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.826-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.830-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.832-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.834-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.835-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.837-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.839-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.840-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.841-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.842-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.843-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.843-0500 c20013| 2016-04-06T02:52:08.921-0500 D REPL [rsSync] replication batch size is 1 
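
[annotation] The applyOps batches being fetched here carry the chunk-split metadata writes to config.chunks, and each split also records a document in config.changelog (visible in an earlier batch with what: "split"). A minimal sketch, assuming a shell connected to one of these config servers, of reading those entries back; the field names (what, ns, time, details.before/left/right) are taken from the changelog document shown in the log:

    // Sketch: list the split events recorded for this collection, newest first.
    var config = db.getSiblingDB("config");
    config.changelog.find({ what: "split", ns: "multidrop.coll" })
          .sort({ time: -1 })
          .forEach(function (entry) {
              // entry.details.before / .left / .right hold the pre- and post-split bounds
              print(entry.time + "  split at " + tojson(entry.details.left.max));
          });
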
[js_test:multi_coll_drop] 2016-04-06T02:53:09.844-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.847-0500 c20013| 2016-04-06T02:52:08.921-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-90.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:09.848-0500 c20013| 2016-04-06T02:52:08.921-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-89.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:09.849-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.850-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.850-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.851-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.852-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.855-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.859-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.859-0500 c20013| 2016-04-06T02:52:08.921-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.862-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.870-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.871-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.884-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.891-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.892-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.901-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.903-0500 c20013| 2016-04-06T02:52:08.922-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.904-0500 c20013| 2016-04-06T02:52:08.922-0500 D QUERY [rsSync] Only one plan is available; it will be run but will 
not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.909-0500 c20013| 2016-04-06T02:52:08.922-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.918-0500 c20013| 2016-04-06T02:52:08.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 742 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|57, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.928-0500 c20013| 2016-04-06T02:52:08.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 742 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.929-0500 c20013| 2016-04-06T02:52:08.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 742 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.934-0500 c20013| 2016-04-06T02:52:08.922-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.939-0500 c20013| 2016-04-06T02:52:08.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 744 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:09.942-0500 c20013| 2016-04-06T02:52:08.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 744 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.943-0500 c20013| 2016-04-06T02:52:08.923-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 
744 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.948-0500 c20013| 2016-04-06T02:52:08.923-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 746 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.923-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|57, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:09.951-0500 c20013| 2016-04-06T02:52:08.923-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 746 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:09.951-0500 c20013| 2016-04-06T02:52:08.923-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 746 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.952-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.954-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.955-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.957-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.970-0500 c20011| 2016-04-06T02:52:26.822-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 200 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|2, t: 2, h: -8119450810825688742, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: -77.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-78.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-77.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:09.972-0500 c20011| 2016-04-06T02:52:26.822-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|2 and ending at ts: Timestamp 1459929146000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:09.972-0500 c20011| 2016-04-06T02:52:26.822-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:53:09.974-0500 c20011| 2016-04-06T02:52:26.823-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:09.977-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.978-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.980-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.982-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.985-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.986-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.988-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.988-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.990-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.990-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.991-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.993-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.995-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:09.995-0500 c20011| 2016-04-06T02:52:26.823-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:09.997-0500 c20011| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.015-0500 c20011| 2016-04-06T02:52:26.824-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-78.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:10.020-0500 c20011| 2016-04-06T02:52:26.824-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-77.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:10.022-0500 c20011| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.028-0500 s20014| 2016-04-06T02:52:51.766-0500 D ASIO [Balancer] startCommand: RemoteCommand 387 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:21.765-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: 
"3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.029-0500 s20015| 2016-04-06T02:52:51.721-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:10.030-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.031-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.031-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.038-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.042-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.042-0500 c20012| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.045-0500 c20012| 2016-04-06T02:52:09.031-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:10.050-0500 c20012| 2016-04-06T02:52:09.031-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.055-0500 c20012| 2016-04-06T02:52:09.031-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 850 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.058-0500 c20012| 2016-04-06T02:52:09.031-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 850 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.059-0500 c20012| 2016-04-06T02:52:09.031-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 850 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.061-0500 c20012| 2016-04-06T02:52:09.031-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 852 -- target:mongovm16:20011 db:local 
expDate:2016-04-06T02:52:14.031-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:10.063-0500 s20015| 2016-04-06T02:52:51.721-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:10.064-0500 s20014| 2016-04-06T02:52:51.766-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 387 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.064-0500 s20014| 2016-04-06T02:52:53.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 192.168.100.28:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:10.068-0500 s20014| 2016-04-06T02:52:53.714-0500 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_TIMEOUT] server [192.168.100.28:20013] [js_test:multi_coll_drop] 2016-04-06T02:53:10.069-0500 s20014| 2016-04-06T02:52:53.714-0500 D - [ReplicaSetMonitorWatcher] User Assertion: 6:network error while attempting to run command 'ismaster' on host 'mongovm16:20013' [js_test:multi_coll_drop] 2016-04-06T02:53:10.071-0500 s20014| 2016-04-06T02:52:53.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929123721858 microSec, clearing pool for mongovm16:20013 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:53:10.071-0500 s20014| 2016-04-06T02:52:53.714-0500 D NETWORK [ReplicaSetMonitorWatcher] Marking host mongovm16:20013 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:10.072-0500 s20014| 2016-04-06T02:52:53.714-0500 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:10.074-0500 s20014| 2016-04-06T02:52:53.714-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:10.076-0500 s20014| 2016-04-06T02:52:53.714-0500 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:10.078-0500 s20014| 2016-04-06T02:52:53.715-0500 D NETWORK [ReplicaSetMonitorWatcher] connected connection! 
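
[annotation] The Balancer threads on s20014/s20015 keep themselves registered by upserting a ping document into config.mongos (the update command shown above), while the ReplicaSetMonitorWatcher probes members with ismaster and marks unreachable hosts as failed. A minimal sketch of inspecting that registry from a shell; the one-minute staleness cutoff is an arbitrary value chosen for the example, not anything the server enforces:

    // Sketch: check which mongos routers have pinged the config servers recently.
    var config = db.getSiblingDB("config");
    var cutoff = new Date(Date.now() - 60 * 1000);   // hypothetical "recent" threshold
    config.mongos.find().forEach(function (m) {
        // _id, ping, up and mongoVersion are the fields the Balancer upserts above
        print(m._id + "  last ping " + m.ping +
              (m.ping >= cutoff ? "  (active)" : "  (stale?)") +
              "  up " + m.up + "s  v" + m.mongoVersion);
    });
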
[js_test:multi_coll_drop] 2016-04-06T02:53:10.078-0500 s20015| 2016-04-06T02:52:51.721-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20011, no events
[js_test:multi_coll_drop] 2016-04-06T02:53:10.080-0500 s20015| 2016-04-06T02:52:51.722-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20013, no events
[js_test:multi_coll_drop] 2016-04-06T02:53:10.084-0500 s20015| 2016-04-06T02:52:51.774-0500 D ASIO [Balancer] startCommand: RemoteCommand 83 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:21.773-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.085-0500 s20015| 2016-04-06T02:52:51.774-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 83 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.086-0500 c20012| 2016-04-06T02:52:09.032-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 852 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.090-0500 c20012| 2016-04-06T02:52:09.033-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.092-0500 c20012| 2016-04-06T02:52:09.033-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 853 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.093-0500 c20012| 2016-04-06T02:52:09.033-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 853 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.094-0500 c20012| 2016-04-06T02:52:09.033-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 853 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.095-0500 c20012| 2016-04-06T02:52:09.034-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 852 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.096-0500 c20012| 2016-04-06T02:52:09.034-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|2, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.099-0500 c20012| 2016-04-06T02:52:09.034-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:10.102-0500 c20012| 2016-04-06T02:52:09.035-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 856 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.035-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.105-0500 c20012| 2016-04-06T02:52:09.035-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 856 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.107-0500 c20012| 2016-04-06T02:52:09.036-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.108-0500 c20012| 2016-04-06T02:52:09.036-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.110-0500 c20012| 2016-04-06T02:52:09.036-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.113-0500 c20012| 2016-04-06T02:52:09.036-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:10.116-0500 c20012| 2016-04-06T02:52:09.036-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:10.118-0500 c20011| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.120-0500 c20011| 2016-04-06T02:52:26.825-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 202 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.825-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.122-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.122-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.123-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
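The score(...) line from conn7 above spells out the query planner's ranking formula: score = baseScore + productivity (advanced/works) + 0.0001 per applicable tie-breaker bonus. A quick re-derivation in shell JavaScript (illustrative; planScore is a hypothetical helper, not a server function):

    // Recompute the logged plan score: baseScore is always 1, productivity is
    // advanced/works, and the tie-breaker bonuses above total 0.0003.
    function planScore(advanced, works, tieBreakerTotal) {
        return 1 + (advanced / works) + tieBreakerTotal;
    }
    print(planScore(1, 1, 0.0003));  // 2.0003, matching the IXSCAN plan above

The same formula also accounts for the score(1.5003) logged later for a 1-advanced / 2-works plan on conn11.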
[js_test:multi_coll_drop] 2016-04-06T02:53:10.126-0500 c20011| 2016-04-06T02:52:26.825-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 202 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.132-0500 c20012| 2016-04-06T02:52:09.038-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 856 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|3, t: 1, h: -6195657287990773069, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02965c17830b843f19a'), state: 2, when: new Date(1459929129036), why: "splitting chunk [{ _id: -86.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.135-0500 c20012| 2016-04-06T02:52:09.039-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|3 and ending at ts: Timestamp 1459929129000|3
[js_test:multi_coll_drop] 2016-04-06T02:53:10.136-0500 c20012| 2016-04-06T02:52:09.041-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 858 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.041-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.137-0500 c20012| 2016-04-06T02:52:09.041-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 858 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.139-0500 c20012| 2016-04-06T02:52:09.042-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.139-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.139-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.140-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.141-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.141-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.144-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.144-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.145-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.145-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.147-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.147-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.148-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.149-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.150-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.150-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.153-0500 c20011| 2016-04-06T02:52:26.825-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.154-0500 c20011| 2016-04-06T02:52:26.825-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.157-0500 c20011| 2016-04-06T02:52:26.826-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.162-0500 c20011| 2016-04-06T02:52:26.826-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 203 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.164-0500 c20011| 2016-04-06T02:52:26.826-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 203 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.165-0500 c20011| 2016-04-06T02:52:26.826-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 203 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.167-0500 c20011| 2016-04-06T02:52:26.827-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.171-0500 c20011| 2016-04-06T02:52:26.827-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 205 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.176-0500 c20011| 2016-04-06T02:52:26.827-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 205 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.181-0500 c20011| 2016-04-06T02:52:26.827-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 202 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.182-0500 c20011| 2016-04-06T02:52:26.827-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 205 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.185-0500 c20011| 2016-04-06T02:52:26.828-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|2, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.186-0500 c20011| 2016-04-06T02:52:26.828-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:10.194-0500 c20011| 2016-04-06T02:52:26.828-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 208 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.828-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.195-0500 c20011| 2016-04-06T02:52:26.828-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 208 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.205-0500 c20011| 2016-04-06T02:52:26.828-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 208 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|3, t: 2, h: 7943051809962790375, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:26.828-0500-5704c03a65c17830b843f1ac", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146828), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -78.0 }, max: { _id: MaxKey } }, left: { min: { _id: -78.0 }, max: { _id: -77.0 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -77.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.208-0500 c20011| 2016-04-06T02:52:26.828-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|3 and ending at ts: Timestamp 1459929146000|3
[js_test:multi_coll_drop] 2016-04-06T02:53:10.211-0500 c20011| 2016-04-06T02:52:26.828-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.217-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.220-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.222-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.222-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.224-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.224-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.225-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.227-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.228-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.230-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.230-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.231-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.233-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.233-0500 c20011| 2016-04-06T02:52:26.829-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:10.238-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.238-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.240-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.241-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.243-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
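The BGSync traffic above is the secondaries' oplog-tailing loop: each node repeatedly issues an awaitable getMore against its sync source's local.oplog.rs cursor, applies whatever batch comes back (here, "replication batch size is 1"), and reports progress upstream. A minimal mongo-shell sketch of one getMore round, with the cursor id and optime copied from the RemoteCommand 208 entry above; term and lastKnownCommittedOpTime are internal replication fields, shown only to mirror the logged command:

    // One round of the oplog tailing loop: wait up to 2.5s for new entries.
    var res = db.getSiblingDB("local").runCommand({
        getMore: NumberLong("22197973872"),   // cursor id from the initial find
        collection: "oplog.rs",
        maxTimeMS: 2500,
        term: NumberLong(2),
        lastKnownCommittedOpTime: { ts: Timestamp(1459929146, 2), t: NumberLong(2) }
    });
    // res.cursor.nextBatch is [] on timeout, or the next oplog entries to apply.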
[js_test:multi_coll_drop] 2016-04-06T02:53:10.244-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.245-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.246-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.246-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.251-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.251-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.253-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.254-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.257-0500 c20011| 2016-04-06T02:52:26.829-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.258-0500 c20011| 2016-04-06T02:52:26.830-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.258-0500 c20011| 2016-04-06T02:52:26.830-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.264-0500 c20011| 2016-04-06T02:52:26.831-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 210 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.830-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.266-0500 c20011| 2016-04-06T02:52:26.831-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 210 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.267-0500 c20011| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.269-0500 c20011| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.271-0500 c20011| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.274-0500 c20011| 2016-04-06T02:52:26.831-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.278-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.284-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.288-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.289-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.289-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.290-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.290-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.292-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.297-0500 c20011| 2016-04-06T02:52:26.831-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.307-0500 c20011| 2016-04-06T02:52:26.831-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 211 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.309-0500 c20011| 2016-04-06T02:52:26.831-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 211 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.311-0500 c20011| 2016-04-06T02:52:26.831-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 211 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.319-0500 c20011| 2016-04-06T02:52:26.832-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.331-0500 c20011| 2016-04-06T02:52:26.832-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 213 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.332-0500 c20011| 2016-04-06T02:52:26.832-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 213 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.345-0500 c20011| 2016-04-06T02:52:26.832-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 213 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.346-0500 c20011| 2016-04-06T02:52:26.832-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 210 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.346-0500 c20011| 2016-04-06T02:52:26.833-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|3, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.346-0500 c20011| 2016-04-06T02:52:26.833-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:10.349-0500 c20011| 2016-04-06T02:52:26.833-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 216 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.833-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.350-0500 c20011| 2016-04-06T02:52:26.833-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 216 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.355-0500 c20011| 2016-04-06T02:52:26.833-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 216 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|4, t: 2, h: 9033909893478134583, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.356-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.356-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.357-0500 c20012| 2016-04-06T02:52:09.042-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:10.357-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.358-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.359-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.359-0500 c20012| 2016-04-06T02:52:09.042-0500 D QUERY [repl writer worker 12] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.360-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.360-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.362-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.363-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.366-0500 c20012| 2016-04-06T02:52:09.042-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.368-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.371-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.371-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.372-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.372-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.373-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.375-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.376-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.379-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.382-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.383-0500 c20012| 2016-04-06T02:52:09.043-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.385-0500 c20012| 2016-04-06T02:52:09.043-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.390-0500 c20012| 2016-04-06T02:52:09.043-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.394-0500 c20012| 2016-04-06T02:52:09.043-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 859 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.395-0500 c20012| 2016-04-06T02:52:09.043-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 859 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.399-0500 c20012| 2016-04-06T02:52:09.044-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 859 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.404-0500 c20012| 2016-04-06T02:52:09.050-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 858 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.405-0500 c20012| 2016-04-06T02:52:09.050-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|3, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.406-0500 c20012| 2016-04-06T02:52:09.050-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:10.408-0500 c20012| 2016-04-06T02:52:09.050-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.419-0500 c20012| 2016-04-06T02:52:09.050-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|3, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.420-0500 c20012| 2016-04-06T02:52:09.050-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.420-0500 c20012| 2016-04-06T02:52:09.050-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1
[js_test:multi_coll_drop] 2016-04-06T02:53:10.428-0500 c20012| 2016-04-06T02:52:09.050-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 862 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.050-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.428-0500 c20012| 2016-04-06T02:52:09.050-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 862 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.437-0500 c20012| 2016-04-06T02:52:09.050-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:10.439-0500 c20012| 2016-04-06T02:52:09.051-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|30 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.440-0500 c20012| 2016-04-06T02:52:09.051-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|3, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.442-0500 c20012| 2016-04-06T02:52:09.051-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|30 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.447-0500 c20012| 2016-04-06T02:52:09.051-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:10.452-0500 c20012| 2016-04-06T02:52:09.051-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|30 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:10.459-0500 c20012| 2016-04-06T02:52:09.052-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.466-0500 c20012| 2016-04-06T02:52:09.052-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 863 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.468-0500 c20012| 2016-04-06T02:52:09.052-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 863 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.471-0500 c20012| 2016-04-06T02:52:09.052-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 863 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.475-0500 c20012| 2016-04-06T02:52:09.056-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 862 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|4, t: 1, h: 6878295864364967569, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: -85.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-86.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-85.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.477-0500 c20012| 2016-04-06T02:52:09.056-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|4 and ending at ts: Timestamp 1459929129000|4
[js_test:multi_coll_drop] 2016-04-06T02:53:10.478-0500 c20012| 2016-04-06T02:52:09.056-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.479-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.480-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.481-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.484-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.487-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.489-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.490-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.490-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.491-0500 c20012| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.492-0500 c20011| 2016-04-06T02:52:26.834-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|4 and ending at ts: Timestamp 1459929146000|4
[js_test:multi_coll_drop] 2016-04-06T02:53:10.499-0500 c20011| 2016-04-06T02:52:26.834-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.499-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.500-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.501-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.501-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.503-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.513-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.513-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.519-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.520-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.521-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.522-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.523-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.524-0500 c20011| 2016-04-06T02:52:26.834-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:10.537-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.541-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.542-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.549-0500 c20011| 2016-04-06T02:52:26.834-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.550-0500 c20011| 2016-04-06T02:52:26.834-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.554-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.555-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.566-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.567-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.568-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.569-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.572-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.573-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.577-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.578-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.579-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.581-0500 c20011| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.582-0500 c20011| 2016-04-06T02:52:26.836-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.582-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.584-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.584-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.586-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.587-0500 c20012| 2016-04-06T02:52:09.057-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:10.588-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.590-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.591-0500 c20012| 2016-04-06T02:52:09.057-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-86.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.591-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.592-0500 c20012| 2016-04-06T02:52:09.057-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-85.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.592-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.593-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.593-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.594-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.598-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.600-0500 c20012| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.601-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.601-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.605-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.607-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.608-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.610-0500 c20011| 2016-04-06T02:52:26.836-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.611-0500 c20011| 2016-04-06T02:52:26.836-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.612-0500 c20011| 2016-04-06T02:52:26.836-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.613-0500 c20011| 2016-04-06T02:52:26.836-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.614-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.621-0500 c20012| 2016-04-06T02:52:09.058-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 866 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.058-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.622-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.625-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.627-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.629-0500 c20012| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:10.633-0500 c20012| 2016-04-06T02:52:09.059-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 866 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.635-0500 c20012| 2016-04-06T02:52:09.059-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:10.646-0500 c20012| 2016-04-06T02:52:09.059-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.650-0500 c20012| 2016-04-06T02:52:09.059-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 867 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.652-0500 c20012| 2016-04-06T02:52:09.059-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 867 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.659-0500 c20012| 2016-04-06T02:52:09.059-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 867 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.666-0500 c20011| 2016-04-06T02:52:26.836-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 218 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.836-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.670-0500 c20011| 2016-04-06T02:52:26.836-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.674-0500 c20011| 2016-04-06T02:52:26.836-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 219 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.675-0500 c20011| 2016-04-06T02:52:26.836-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 218 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.677-0500 c20011| 2016-04-06T02:52:26.836-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 219 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.678-0500 c20011| 2016-04-06T02:52:26.836-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 219 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.680-0500 c20011| 2016-04-06T02:52:26.837-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.681-0500 c20011| 2016-04-06T02:52:26.837-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 221 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.683-0500 c20011| 2016-04-06T02:52:26.837-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 221 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:10.684-0500 c20012| 2016-04-06T02:52:09.062-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 866 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.686-0500 c20012| 2016-04-06T02:52:09.063-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.687-0500 c20012| 2016-04-06T02:52:09.063-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:10.688-0500 c20012| 2016-04-06T02:52:09.063-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 870 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.063-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.689-0500 c20012| 2016-04-06T02:52:09.063-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 870 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:10.691-0500 c20012| 2016-04-06T02:52:09.063-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 870 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|5, t: 1, h: -2747954062576067140, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:09.062-0500-5704c02965c17830b843f19b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129062), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -86.0 }, max: { _id: MaxKey } }, left: { min: { _id: -86.0 }, max: { _id: -85.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -85.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:10.692-0500 2016-04-06T02:52:56.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 192.168.100.28:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:10.694-0500 c20012| 2016-04-06T02:52:09.063-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|5 and ending at ts: Timestamp 1459929129000|5
[js_test:multi_coll_drop] 2016-04-06T02:53:10.694-0500 c20012| 2016-04-06T02:52:09.063-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes
[js_test:multi_coll_drop] 2016-04-06T02:53:10.701-0500 c20012| 2016-04-06T02:52:09.064-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:10.704-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.705-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.706-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.706-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.712-0500 2016-04-06T02:52:56.738-0500 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_TIMEOUT] server [192.168.100.28:20013] [js_test:multi_coll_drop] 2016-04-06T02:53:10.712-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.713-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.719-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.720-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.723-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.729-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.729-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.735-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.738-0500 c20012| 2016-04-06T02:52:09.064-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:10.739-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.740-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.748-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.751-0500 c20012| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.754-0500 c20012| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.758-0500 2016-04-06T02:52:56.738-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at
1459929123687558 microSec, clearing pool for mongovm16:20013 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:53:10.762-0500 c20012| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.763-0500 c20012| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.764-0500 c20012| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.767-0500 c20012| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.770-0500 c20012| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.774-0500 c20012| 2016-04-06T02:52:09.066-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.777-0500 c20012| 2016-04-06T02:52:09.066-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 872 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.778-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.780-0500 2016-04-06T02:52:56.738-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 20 secs, remote host 192.168.100.28:20012) [js_test:multi_coll_drop] 2016-04-06T02:53:10.780-0500 c20012| 2016-04-06T02:52:09.066-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 872 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.782-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.783-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.785-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.789-0500 c20012|
2016-04-06T02:52:09.066-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 872 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.794-0500 c20012| 2016-04-06T02:52:09.066-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 874 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.066-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:10.795-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.798-0500 c20012| 2016-04-06T02:52:09.066-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 874 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.798-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.799-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.801-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.805-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.805-0500 c20012| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.809-0500 c20012| 2016-04-06T02:52:09.066-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:10.815-0500 c20012| 2016-04-06T02:52:09.067-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.819-0500 c20012| 2016-04-06T02:52:09.067-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 875 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.821-0500 c20012| 2016-04-06T02:52:09.067-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 875 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.822-0500 c20012| 2016-04-06T02:52:09.067-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 875 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.825-0500 c20012| 2016-04-06T02:52:09.073-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.830-0500 c20012| 2016-04-06T02:52:09.073-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 877 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.833-0500 c20012| 2016-04-06T02:52:09.073-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 877 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.833-0500 c20012| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 877 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.834-0500 c20012| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 874 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.836-0500 c20012| 2016-04-06T02:52:09.074-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.837-0500 c20012| 2016-04-06T02:52:09.074-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:10.838-0500 c20012| 2016-04-06T02:52:09.074-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 880 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.074-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:10.839-0500 c20012| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 880 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.841-0500 c20012| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 880 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|6, t: 1, h: 1904439408712808447, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.843-0500 c20012| 2016-04-06T02:52:09.075-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|6 and ending at ts: Timestamp 1459929129000|6 [js_test:multi_coll_drop] 2016-04-06T02:53:10.844-0500 c20012| 2016-04-06T02:52:09.075-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:10.844-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.845-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.845-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.846-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.848-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.848-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.848-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.849-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.849-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.850-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.851-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.852-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.853-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.854-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.854-0500 c20012| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.855-0500 c20012| 2016-04-06T02:52:09.075-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:10.855-0500 c20012| 2016-04-06T02:52:09.076-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:10.855-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.856-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.857-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
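
[annotation] The config.locks traffic in this stretch is the distributed lock that serializes chunk operations on "multidrop.coll": the oplog entry at ts ...|6 releases it ({ $set: { state: 0 } }), and the matching re-acquisition (state: 2, a fresh ts ObjectId, and a why of "splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll") arrives a few records below. A hedged sketch of what the secondary is applying, written as the equivalent shell updates; the real writes are issued internally by the sharding code, not through this API:

    var locks = db.getSiblingDB("config").locks;
    // Release (oplog op at ts ...|6):
    locks.update({ _id: "multidrop.coll" }, { $set: { state: 0 } });
    // Re-acquire for the next split (oplog op at ts ...|7); values taken from the log:
    locks.update({ _id: "multidrop.coll" },
                 { $set: { ts: ObjectId("5704c02965c17830b843f19c"), state: 2,
                           when: new Date(1459929129093),
                           why: "splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll" } });
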
2016-04-06T02:53:10.858-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.860-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.861-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.863-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.864-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.864-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.867-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.868-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.869-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.869-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.870-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.871-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.872-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.874-0500 c20012| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.876-0500 c20012| 2016-04-06T02:52:09.076-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:10.881-0500 c20012| 2016-04-06T02:52:09.076-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.886-0500 c20012| 2016-04-06T02:52:09.076-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 882 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.888-0500 c20012| 2016-04-06T02:52:09.076-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 882 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.889-0500 c20012| 2016-04-06T02:52:09.077-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 882 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.897-0500 c20012| 2016-04-06T02:52:09.077-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 884 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.077-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:10.900-0500 c20012| 2016-04-06T02:52:09.079-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 884 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.906-0500 c20012| 2016-04-06T02:52:09.086-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.908-0500 c20012| 2016-04-06T02:52:09.086-0500 D REPL [conn7] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929129000|6, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929129000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.910-0500 c20012| 2016-04-06T02:52:09.086-0500 D REPL [conn7] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999964μs [js_test:multi_coll_drop] 2016-04-06T02:53:10.915-0500 c20012| 2016-04-06T02:52:09.087-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.920-0500 c20012| 2016-04-06T02:52:09.087-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 885 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:10.922-0500 c20012| 2016-04-06T02:52:09.087-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 885 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.927-0500 c20012| 2016-04-06T02:52:09.087-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 885 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.929-0500 c20012| 2016-04-06T02:52:09.088-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 884 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.932-0500 c20012| 2016-04-06T02:52:09.088-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.932-0500 c20012| 2016-04-06T02:52:09.088-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:10.936-0500 c20012| 2016-04-06T02:52:09.088-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 888 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.088-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:10.941-0500 c20012| 2016-04-06T02:52:09.088-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:10.946-0500 c20012| 2016-04-06T02:52:09.088-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.949-0500 c20012| 2016-04-06T02:52:09.088-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 888 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:10.951-0500 c20012| 2016-04-06T02:52:09.088-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:10.959-0500 c20012| 2016-04-06T02:52:09.088-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:10.963-0500 c20012| 2016-04-06T02:52:09.094-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 888 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|7, t: 1, h: 7424373951997247397, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02965c17830b843f19c'), state: 2, when: new Date(1459929129093), why: "splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:10.972-0500 c20012| 2016-04-06T02:52:09.094-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|7 and ending at ts: Timestamp 1459929129000|7 [js_test:multi_coll_drop] 2016-04-06T02:53:10.975-0500 c20012| 2016-04-06T02:52:09.094-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:10.976-0500 c20012| 2016-04-06T02:52:09.094-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.979-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.981-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.982-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.986-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.988-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.988-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.989-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.989-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.991-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.991-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.992-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.993-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.994-0500 c20012| 2016-04-06T02:52:09.095-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:10.996-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.997-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:10.998-0500 c20012| 2016-04-06T02:52:09.095-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:10.999-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.008-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.010-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
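
[annotation] The config.chunks read above is the client-visible effect of all this bookkeeping: conn7's find runs with readConcern { level: "majority", afterOpTime: ... }, so waitUntilOpTime blocks it until the committed snapshot reaches { ts: Timestamp 1459929129000|6, t: 1 }, and only then does the 'committed' snapshot answer the query. The same command as it could be issued from a shell against the config server; afterOpTime is normally injected by the sharding code and is shown here only to mirror the log:

    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },       // newest chunk version first
        limit: 1,
        maxTimeMS: 30000,
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929129, 6), t: NumberLong(1) }
        }
    });
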
2016-04-06T02:53:11.010-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.011-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.011-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.011-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.016-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.016-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.016-0500 c20012| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.017-0500 c20012| 2016-04-06T02:52:09.096-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.022-0500 c20012| 2016-04-06T02:52:09.096-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.023-0500 c20012| 2016-04-06T02:52:09.096-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.029-0500 c20012| 2016-04-06T02:52:09.096-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.030-0500 c20012| 2016-04-06T02:52:09.096-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.033-0500 c20012| 2016-04-06T02:52:09.096-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.033-0500 c20012| 2016-04-06T02:52:09.096-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.038-0500 c20012| 2016-04-06T02:52:09.096-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:11.042-0500 c20012| 2016-04-06T02:52:09.096-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.048-0500 c20012| 2016-04-06T02:52:09.096-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 890 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.050-0500 c20012| 2016-04-06T02:52:09.096-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 890 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.051-0500 c20012| 2016-04-06T02:52:09.096-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 890 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.053-0500 c20012| 2016-04-06T02:52:09.096-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 892 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.096-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:11.055-0500 c20012| 2016-04-06T02:52:09.096-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 892 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.059-0500 c20012| 2016-04-06T02:52:09.100-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.085-0500 c20012| 2016-04-06T02:52:09.100-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 893 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929129000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.087-0500 c20012| 2016-04-06T02:52:09.100-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 893 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.091-0500 c20012| 2016-04-06T02:52:09.100-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 893 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.094-0500 c20012| 2016-04-06T02:52:09.100-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 892 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.095-0500 c20012| 2016-04-06T02:52:09.100-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.095-0500 c20012| 2016-04-06T02:52:09.100-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:11.099-0500 c20012| 2016-04-06T02:52:09.100-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 896 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.100-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:11.100-0500 c20012| 2016-04-06T02:52:09.101-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 896 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.107-0500 c20012| 2016-04-06T02:52:09.103-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 896 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|8, t: 1, h: -8286090448525995533, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: -84.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-85.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-84.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.111-0500 c20012| 2016-04-06T02:52:09.103-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|8 and ending at ts: Timestamp 1459929129000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:11.118-0500 c20012| 2016-04-06T02:52:09.103-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:11.121-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.123-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.126-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.129-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.130-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.130-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.132-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.134-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.134-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.147-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.147-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.151-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.153-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.153-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.157-0500 c20012| 2016-04-06T02:52:09.103-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:11.158-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.159-0500 c20012| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.159-0500 c20012| 2016-04-06T02:52:09.103-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-85.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:11.160-0500 c20012| 2016-04-06T02:52:09.104-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-84.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:11.162-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
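
[annotation] Request 900's batch, a few records above, carries the split commit itself: one applyOps command that upserts the two replacement chunk documents with bumped versions (lastmod 1000|33 and 1000|34) under writeConcern w: "majority", making the metadata change atomic and majority-durable before the split returns. The idhack lookups by the repl writer workers right above ({ _id: "multidrop.coll-_id_-85.0" } and ...-84.0) are that entry being applied on this secondary; "idhack" is the planner's fast path for exact-_id matches, skipping plan selection. A condensed sketch of the command shape, with fields abbreviated from the log (lastmodEpoch omitted):

    db.adminCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o:  { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp(1000, 33),
                    ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: -84.0 },
                    shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-85.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o:  { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp(1000, 34),
                    ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: MaxKey },
                    shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-84.0" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
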
2016-04-06T02:53:11.165-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.169-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.170-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.170-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.171-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.172-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.173-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.174-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.178-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.178-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.179-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.180-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.180-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.181-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.182-0500 c20012| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.184-0500 c20012| 2016-04-06T02:52:09.104-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:11.187-0500 c20012| 2016-04-06T02:52:09.104-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.191-0500 c20012| 2016-04-06T02:52:09.104-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 898 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.192-0500 c20012| 2016-04-06T02:52:09.104-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 898 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.193-0500 c20012| 2016-04-06T02:52:09.104-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 898 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.195-0500 c20012| 2016-04-06T02:52:09.106-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 900 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.106-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:11.199-0500 c20012| 2016-04-06T02:52:09.107-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.201-0500 c20012| 2016-04-06T02:52:09.107-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 901 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } 
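
[annotation] Note the gap between durableOpTime and appliedOpTime in the reports above (member 1 applies ts ...|8 before it becomes durable): an op is applied in memory first and journaled shortly after, and the primary only advances the commit point to the newest optime that a majority of members have made durable. That is why _lastCommittedOpTime on c20012 moves to ...|7 while ...|8 is already applied, catching up to ...|8 in the next round. A worked illustration of the rule, with optimes simplified to their increments within the same second (plain JavaScript, assumed semantics):

    // Newest optime durable on a majority of a 3-member set (majority = 2):
    function majorityCommitPoint(durableIncrements, majoritySize) {
        var sorted = durableIncrements.slice().sort(function (a, b) { return b - a; });
        return sorted[majoritySize - 1];  // durable on >= majoritySize nodes
    }
    // Primary durable at |8, this node at |7, third member lagging (illustrative):
    majorityCommitPoint([8, 7, 3], 2);    // => 7, matching the commit point in the log
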
[js_test:multi_coll_drop] 2016-04-06T02:53:11.202-0500 c20012| 2016-04-06T02:52:09.107-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 901 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.205-0500 c20012| 2016-04-06T02:52:09.107-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 901 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.210-0500 c20012| 2016-04-06T02:52:09.109-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 900 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.217-0500 c20012| 2016-04-06T02:52:09.111-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 900 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|9, t: 1, h: 6671296048852295689, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:09.107-0500-5704c02965c17830b843f19d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129107), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -85.0 }, max: { _id: MaxKey } }, left: { min: { _id: -85.0 }, max: { _id: -84.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -84.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.218-0500 c20012| 2016-04-06T02:52:09.111-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|8, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.218-0500 c20012| 2016-04-06T02:52:09.111-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|9 and ending at ts: Timestamp 1459929129000|9
[js_test:multi_coll_drop] 2016-04-06T02:53:11.221-0500 c20012| 2016-04-06T02:52:09.111-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:11.223-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.225-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.226-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.227-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.229-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.231-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.254-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.257-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.258-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.259-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.263-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.264-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.265-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.268-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.270-0500 c20012| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.270-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.272-0500 c20012| 2016-04-06T02:52:09.112-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:11.275-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.277-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.278-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.279-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.280-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.281-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.283-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.285-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.290-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.291-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.300-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.302-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.307-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.317-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.321-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.322-0500 c20012| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.325-0500 c20012| 2016-04-06T02:52:09.112-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:11.331-0500 c20012| 2016-04-06T02:52:09.112-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.343-0500 c20012| 2016-04-06T02:52:09.112-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 904 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.347-0500 c20012| 2016-04-06T02:52:09.112-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 904 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.353-0500 c20012| 2016-04-06T02:52:09.113-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 904 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.367-0500 c20012| 2016-04-06T02:52:09.113-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 906 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.113-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.371-0500 c20012| 2016-04-06T02:52:09.113-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 906 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.389-0500 c20012| 2016-04-06T02:52:09.113-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.393-0500 c20012| 2016-04-06T02:52:09.113-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 907 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.394-0500 c20012| 2016-04-06T02:52:09.113-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 907 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.395-0500 c20012| 2016-04-06T02:52:09.114-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 907 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.396-0500 c20012| 2016-04-06T02:52:09.114-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 906 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.397-0500 c20012| 2016-04-06T02:52:09.114-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|9, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.398-0500 c20012| 2016-04-06T02:52:09.114-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:11.399-0500 c20012| 2016-04-06T02:52:09.114-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 910 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.114-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.400-0500 c20012| 2016-04-06T02:52:09.114-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 910 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.404-0500 c20012| 2016-04-06T02:52:09.114-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 910 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|10, t: 1, h: -8221257626238961736, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.406-0500 c20012| 2016-04-06T02:52:09.114-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|10 and ending at ts: Timestamp 1459929129000|10
[js_test:multi_coll_drop] 2016-04-06T02:53:11.407-0500 c20012| 2016-04-06T02:52:09.114-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
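
Requests 900, 906 and 910 above are the background sync fetcher tailing the sync source's oplog: an awaitData getMore on cursor 20785203637 with maxTimeMS: 2500, carrying term and lastKnownCommittedOpTime so replication metadata rides along with every (possibly empty) batch. A rough shell equivalent of opening such a tailable cursor on a member's oplog (illustrative only; the real fetcher resumes from its last fetched timestamp):

    var local = db.getSiblingDB("local");
    // Find where a fetcher would resume: the newest oplog entry.
    var last = local.oplog.rs.find().sort({ $natural: -1 }).limit(1).next();
    // Tailable + awaitData, like the getMore loop behind cursor 20785203637.
    var cur = local.oplog.rs.find({ ts: { $gt: last.ts } })
                            .addOption(DBQuery.Option.tailable)
                            .addOption(DBQuery.Option.awaitData);
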
[js_test:multi_coll_drop] 2016-04-06T02:53:11.407-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.408-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.408-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.409-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.410-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.410-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.414-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.415-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.416-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.418-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.423-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.430-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.431-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.432-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.433-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.434-0500 c20012| 2016-04-06T02:52:09.115-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:11.435-0500 c20012| 2016-04-06T02:52:09.115-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.437-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.440-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.442-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.445-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.445-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.446-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.451-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.453-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.455-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.455-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.456-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.459-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.464-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.464-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.466-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.466-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.468-0500 c20012| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.470-0500 c20012| 2016-04-06T02:52:09.116-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:11.476-0500 c20012| 2016-04-06T02:52:09.116-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.486-0500 c20012| 2016-04-06T02:52:09.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 912 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.498-0500 c20012| 2016-04-06T02:52:09.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 912 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.499-0500 c20012| 2016-04-06T02:52:09.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 912 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.502-0500 c20012| 2016-04-06T02:52:09.117-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 914 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.117-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.505-0500 c20012| 2016-04-06T02:52:09.117-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 914 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.507-0500 c20012| 2016-04-06T02:52:09.125-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.509-0500 c20012| 2016-04-06T02:52:09.125-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 915 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.510-0500 c20012| 2016-04-06T02:52:09.125-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 915 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.512-0500 c20012| 2016-04-06T02:52:09.126-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 915 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.512-0500 c20012| 2016-04-06T02:52:09.126-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 914 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.514-0500 c20012| 2016-04-06T02:52:09.126-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|10, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.514-0500 c20012| 2016-04-06T02:52:09.126-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:11.515-0500 c20012| 2016-04-06T02:52:09.126-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 918 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.126-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.517-0500 c20012| 2016-04-06T02:52:09.127-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 918 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.519-0500 c20012| 2016-04-06T02:52:09.127-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.520-0500 c20012| 2016-04-06T02:52:09.127-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.522-0500 c20012| 2016-04-06T02:52:09.127-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.524-0500 c20012| 2016-04-06T02:52:09.127-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:11.529-0500 c20012| 2016-04-06T02:52:09.127-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:11.533-0500 c20012| 2016-04-06T02:52:09.127-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|32 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.535-0500 c20012| 2016-04-06T02:52:09.127-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.536-0500 c20012| 2016-04-06T02:52:09.128-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|32 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.538-0500 c20012| 2016-04-06T02:52:09.128-0500 D QUERY [conn7] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:11.540-0500 c20012| 2016-04-06T02:52:09.128-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|32 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:11.547-0500 c20012| 2016-04-06T02:52:09.130-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 918 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|11, t: 1, h: -3977388700970809932, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02965c17830b843f19e'), state: 2, when: new Date(1459929129129), why: "splitting chunk [{ _id: -84.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.549-0500 c20012| 2016-04-06T02:52:09.130-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|11 and ending at ts: Timestamp 1459929129000|11
[js_test:multi_coll_drop] 2016-04-06T02:53:11.550-0500 c20012| 2016-04-06T02:52:09.130-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:11.552-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.552-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.553-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.554-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.554-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.562-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.577-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.579-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.583-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.583-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.584-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.588-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.590-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.594-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.595-0500 c20012| 2016-04-06T02:52:09.130-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:11.596-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.597-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.599-0500 c20012| 2016-04-06T02:52:09.130-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.602-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.606-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.611-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.613-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.617-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.617-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.618-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.621-0500 c20012| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.621-0500 c20012| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.625-0500 c20012| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.628-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.630-0500 c20012| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.630-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.634-0500 c20012| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.635-0500 c20012| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.643-0500 c20012| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:11.655-0500 c20012| 2016-04-06T02:52:09.131-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:11.700-0500 c20012| 2016-04-06T02:52:09.131-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.706-0500 c20012| 2016-04-06T02:52:09.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 920 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.709-0500 c20012| 2016-04-06T02:52:09.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 920 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.714-0500 c20012| 2016-04-06T02:52:09.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 920 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.727-0500 c20012| 2016-04-06T02:52:09.132-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 922 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.132-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.733-0500 c20012| 2016-04-06T02:52:09.132-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 922 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.739-0500 c20012| 2016-04-06T02:52:09.133-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.745-0500 c20012| 2016-04-06T02:52:09.133-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 923 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.747-0500 c20012| 2016-04-06T02:52:09.133-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 923 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:11.749-0500 c20012| 2016-04-06T02:52:09.133-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 923 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.750-0500 c20012| 2016-04-06T02:52:09.139-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 922 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.753-0500 c20012| 2016-04-06T02:52:09.139-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|11, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.755-0500 c20012| 2016-04-06T02:52:09.140-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:11.758-0500 c20012| 2016-04-06T02:52:09.140-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 926 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.140-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.759-0500 c20012| 2016-04-06T02:52:09.142-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.762-0500 c20012| 2016-04-06T02:52:09.142-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:11.767-0500 c20012| 2016-04-06T02:52:09.142-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }, limit: 1, maxTimeMS: 30000 }
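
conn11's find on config.collections shows the config-server read path used throughout this test: readConcern { level: "majority", afterOpTime: ... } makes the command block ("Waiting for 'committed' snapshot") until the requested optime is majority-committed, then answer from the committed snapshot. A hedged shell equivalent of the same read (afterOpTime is attached internally by the sharding client; a plain majority read is the user-visible form):

    printjson(db.getSiblingDB("config").runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        readConcern: { level: "majority" },  // afterOpTime added internally
        limit: 1,
        maxTimeMS: 30000
    }));
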
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.768-0500 c20012| 2016-04-06T02:52:09.142-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:11.769-0500 c20012| 2016-04-06T02:52:09.143-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 926 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.779-0500 c20012| 2016-04-06T02:52:09.143-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:11.782-0500 c20012| 2016-04-06T02:52:09.143-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|34 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.782-0500 c20012| 2016-04-06T02:52:09.143-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:11.786-0500 c20012| 2016-04-06T02:52:09.143-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|34 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.789-0500 c20012| 2016-04-06T02:52:09.143-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:11.796-0500 c20012| 2016-04-06T02:52:09.143-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|34 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:11.805-0500 c20012| 2016-04-06T02:52:09.145-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 926 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|12, t: 1, h: 8940339967816449048, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: -83.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-84.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-83.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.808-0500 c20012| 2016-04-06T02:52:09.145-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|12 and ending at ts: Timestamp 1459929129000|12 [js_test:multi_coll_drop] 2016-04-06T02:53:11.810-0500 c20012| 2016-04-06T02:52:09.145-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:11.810-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.812-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.813-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.816-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.816-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.818-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.819-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.819-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.820-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.821-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.822-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.822-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.823-0500 c20012| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.825-0500 c20012| 2016-04-06T02:52:09.145-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:11.826-0500 c20012| 2016-04-06T02:52:09.146-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll-_id_-84.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:11.830-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.835-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.837-0500 c20012| 2016-04-06T02:52:09.146-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll-_id_-83.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:11.838-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.840-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:11.843-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.847-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.847-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.850-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.851-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.852-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.852-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.853-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.857-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.858-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.858-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.860-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.863-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.865-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.866-0500 c20012| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.867-0500 c20012| 2016-04-06T02:52:09.146-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:11.874-0500 c20012| 2016-04-06T02:52:09.147-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.877-0500 c20012| 2016-04-06T02:52:09.147-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 928 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.879-0500 c20012| 2016-04-06T02:52:09.147-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 928 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.881-0500 c20012| 2016-04-06T02:52:09.147-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 928 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.886-0500 c20012| 2016-04-06T02:52:09.152-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 930 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.152-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:11.887-0500 c20012| 2016-04-06T02:52:09.153-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 930 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.895-0500 c20012| 2016-04-06T02:52:09.155-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.903-0500 c20012| 2016-04-06T02:52:09.155-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 931 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:11.908-0500 c20012| 2016-04-06T02:52:09.155-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 931 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.909-0500 c20012| 2016-04-06T02:52:09.156-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 931 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.910-0500 c20012| 2016-04-06T02:52:10.074-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 933 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:20.074-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.911-0500 c20012| 2016-04-06T02:52:10.074-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 933 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.913-0500 c20012| 2016-04-06T02:52:10.080-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 934 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:20.080-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.915-0500 c20012| 2016-04-06T02:52:10.080-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 934 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:11.918-0500 c20012| 2016-04-06T02:52:10.081-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 934 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, opTime: { ts: Timestamp 1459929129000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:11.922-0500 c20012| 2016-04-06T02:52:10.081-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|12, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.922-0500 c20012| 2016-04-06T02:52:10.081-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:12.081Z [js_test:multi_coll_drop] 2016-04-06T02:53:11.926-0500 c20012| 2016-04-06T02:52:10.082-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.926-0500 c20012| 2016-04-06T02:52:10.082-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:11.931-0500 c20012| 2016-04-06T02:52:10.082-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:11.935-0500 c20012| 2016-04-06T02:52:10.164-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 930 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.938-0500 c20012| 2016-04-06T02:52:10.164-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 933 finished with response: { ok: 1.0, 
electionTime: new Date(6270347837762961409), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, opTime: { ts: Timestamp 1459929129000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:11.940-0500 c20012| 2016-04-06T02:52:10.165-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.941-0500 c20012| 2016-04-06T02:52:10.165-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:11.941-0500 c20012| 2016-04-06T02:52:10.183-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:12.183Z [js_test:multi_coll_drop] 2016-04-06T02:53:11.943-0500 c20012| 2016-04-06T02:52:10.183-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:11.945-0500 c20012| 2016-04-06T02:52:10.183-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:53:11.946-0500 c20012| 2016-04-06T02:52:10.183-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 938 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.183-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:11.947-0500 c20012| 2016-04-06T02:52:10.183-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 938 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:11.953-0500 c20012| 2016-04-06T02:52:10.184-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 938 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|1, t: 1, h: -7830848170959971096, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:10.165-0500-5704c02a65c17830b843f19f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130165), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -84.0 }, max: { _id: MaxKey } }, left: { min: { _id: -84.0 }, max: { _id: -83.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -83.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.955-0500 c20012| 2016-04-06T02:52:10.184-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:36863 #12 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:11.959-0500 c20012| 2016-04-06T02:52:10.184-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|1 and ending at ts: Timestamp 1459929130000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:11.960-0500 c20012| 2016-04-06T02:52:10.184-0500 D COMMAND [conn12] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:11.962-0500 c20012| 2016-04-06T02:52:10.184-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
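The getMore batch above carries a config.changelog document recording a chunk split of multidrop.coll: the range [{ _id: -84.0 }, { _id: MaxKey }) becomes [-84.0, -83.0) plus [-83.0, MaxKey), each half tagged with a new lastmod under the same lastmodEpoch. As a rough shell sketch (the host is picked arbitrarily from the config set in this log), such changelog entries can be inspected after the fact:

var conn = new Mongo("mongovm16:20011");  // any member of multidrop-configRS
conn.getDB("config").changelog
    .find({ what: "split", ns: "multidrop.coll" })
    .sort({ time: -1 })   // most recent split first
    .limit(1)
    .pretty();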
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:11.965-0500 c20012| 2016-04-06T02:52:10.184-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:11.966-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.967-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.968-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.969-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.970-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.974-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.977-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.978-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.978-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.979-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.980-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.980-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.981-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.982-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.985-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.986-0500 c20012| 2016-04-06T02:52:10.185-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:11.987-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.988-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.989-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer 
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.992-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.993-0500 c20012| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:11.999-0500 c20012| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.017-0500 c20012| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.018-0500 c20012| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.031-0500 c20012| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.031-0500 c20012| 2016-04-06T02:52:10.186-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.044-0500 c20012| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.045-0500 c20012| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.050-0500 c20012| 2016-04-06T02:52:10.187-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 940 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.187-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.050-0500 c20012| 2016-04-06T02:52:10.187-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 940 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.052-0500 c20012| 2016-04-06T02:52:10.187-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:12.054-0500 c20012| 2016-04-06T02:52:10.187-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.055-0500 c20012| 2016-04-06T02:52:10.187-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.056-0500 c20012| 2016-04-06T02:52:10.187-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.058-0500 c20012| 2016-04-06T02:52:10.187-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.061-0500 c20012| 2016-04-06T02:52:10.187-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.062-0500 c20012| 2016-04-06T02:52:10.192-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.065-0500 c20012| 2016-04-06T02:52:10.192-0500 D QUERY 
[rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.070-0500 c20012| 2016-04-06T02:52:10.192-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.078-0500 c20012| 2016-04-06T02:52:10.192-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 941 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.079-0500 c20012| 2016-04-06T02:52:10.192-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 941 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.082-0500 c20012| 2016-04-06T02:52:10.192-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 941 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.086-0500 c20012| 2016-04-06T02:52:10.216-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.090-0500 c20012| 2016-04-06T02:52:10.216-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 943 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.091-0500 c20012| 2016-04-06T02:52:10.216-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 943 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.100-0500 c20012| 2016-04-06T02:52:10.216-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Request 943 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.103-0500 c20012| 2016-04-06T02:52:10.216-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 940 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.103-0500 c20012| 2016-04-06T02:52:10.216-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.105-0500 c20012| 2016-04-06T02:52:10.216-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:12.110-0500 c20012| 2016-04-06T02:52:10.217-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 946 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.217-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.115-0500 c20012| 2016-04-06T02:52:10.217-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 946 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.119-0500 c20012| 2016-04-06T02:52:10.217-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 946 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|2, t: 1, h: 1200965899079533550, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.128-0500 c20012| 2016-04-06T02:52:10.217-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|2 and ending at ts: Timestamp 1459929130000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:12.129-0500 c20012| 2016-04-06T02:52:10.218-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
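Requests 930, 938, 940, and 946 above show the bgsync fetcher's steady state: an awaitData getMore against the sync source's local.oplog.rs with maxTimeMS 2500, returning either an empty nextBatch (nothing new) or the next ops to apply. The term and lastKnownCommittedOpTime fields are internal replication bookkeeping piggybacked on the command. A minimal shell approximation of the same tail, assuming a plain tailable/awaitData read is enough for illustration (host and timestamp copied from the log):

var local = new Mongo("mongovm16:20011").getDB("local");
// Open a tailable, awaitData cursor on the oplog past a known timestamp.
var first = local.runCommand({
    find: "oplog.rs",
    filter: { ts: { $gt: Timestamp(1459929130, 1) } },
    tailable: true,
    awaitData: true
});
// Poll for new entries; maxTimeMS caps the server-side wait, as in the
// getMore commands logged above.
var next = local.runCommand({
    getMore: first.cursor.id,
    collection: "oplog.rs",
    maxTimeMS: 2500
});
printjson(next.cursor.nextBatch);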
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.131-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.133-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.134-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.138-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.138-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.139-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.140-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.142-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.145-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.147-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.148-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.149-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.151-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.152-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.152-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.153-0500 c20012| 2016-04-06T02:52:10.218-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:12.155-0500 c20012| 2016-04-06T02:52:10.218-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:12.162-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.163-0500 c20012| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.164-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:12.168-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.169-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.172-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.172-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.174-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.175-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.176-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.177-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.180-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.180-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.182-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.184-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.186-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.186-0500 c20012| 2016-04-06T02:52:10.219-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.187-0500 c20012| 2016-04-06T02:52:10.219-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.193-0500 c20012| 2016-04-06T02:52:10.219-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.195-0500 c20012| 2016-04-06T02:52:10.219-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 948 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.197-0500 c20012| 2016-04-06T02:52:10.219-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 948 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.199-0500 c20012| 2016-04-06T02:52:10.219-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 948 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.204-0500 c20012| 2016-04-06T02:52:10.220-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 950 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.220-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.205-0500 c20012| 2016-04-06T02:52:10.220-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 950 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.211-0500 c20012| 2016-04-06T02:52:10.221-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.216-0500 c20012| 2016-04-06T02:52:10.221-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 951 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|2, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.218-0500 c20012| 2016-04-06T02:52:10.221-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 951 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.220-0500 c20012| 2016-04-06T02:52:10.221-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 951 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.221-0500 c20012| 2016-04-06T02:52:10.221-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 950 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.223-0500 c20012| 2016-04-06T02:52:10.221-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.226-0500 c20012| 2016-04-06T02:52:10.221-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:12.227-0500 c20012| 2016-04-06T02:52:10.222-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 954 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.222-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.230-0500 c20012| 2016-04-06T02:52:10.222-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 954 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.232-0500 c20012| 2016-04-06T02:52:10.225-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|34 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.233-0500 c20012| 2016-04-06T02:52:10.225-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.235-0500 c20012| 2016-04-06T02:52:10.225-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|34 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.238-0500 c20012| 2016-04-06T02:52:10.226-0500 D QUERY [conn7] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:12.243-0500 c20012| 2016-04-06T02:52:10.226-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|34 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:12.248-0500 c20012| 2016-04-06T02:52:10.227-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.250-0500 c20012| 2016-04-06T02:52:10.227-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.253-0500 c20012| 2016-04-06T02:52:10.227-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.255-0500 c20012| 2016-04-06T02:52:10.227-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:12.265-0500 c20012| 2016-04-06T02:52:10.227-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:12.271-0500 c20012| 2016-04-06T02:52:10.228-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 954 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|3, t: 1, h: 4850188129135545978, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02a65c17830b843f1a0'), state: 2, when: new Date(1459929130228), why: "splitting chunk [{ _id: -83.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.273-0500 c20012| 2016-04-06T02:52:10.228-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|3 and ending at ts: Timestamp 1459929130000|3 [js_test:multi_coll_drop] 2016-04-06T02:53:12.275-0500 c20012| 2016-04-06T02:52:10.229-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.276-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.277-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.279-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.280-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.282-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.282-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.283-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.283-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.284-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.285-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.287-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.289-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.290-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.291-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.294-0500 c20012| 2016-04-06T02:52:10.229-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:12.295-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.296-0500 c20012| 2016-04-06T02:52:10.229-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:12.297-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.298-0500 c20012| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.299-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:12.301-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.302-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.303-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.306-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.306-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.308-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.309-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.310-0500 s20015| 2016-04-06T02:52:56.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 192.168.100.28:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:12.311-0500 s20015| 2016-04-06T02:52:56.714-0500 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_TIMEOUT] server [192.168.100.28:20013] [js_test:multi_coll_drop] 2016-04-06T02:53:12.313-0500 s20015| 2016-04-06T02:52:56.714-0500 D - [ReplicaSetMonitorWatcher] User Assertion: 6:network error while attempting to run command 'ismaster' on host 'mongovm16:20013' [js_test:multi_coll_drop] 2016-04-06T02:53:12.317-0500 s20015| 2016-04-06T02:52:56.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929137335889 microSec, clearing pool for mongovm16:20013 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:53:12.318-0500 s20015| 2016-04-06T02:52:56.714-0500 D NETWORK [ReplicaSetMonitorWatcher] Marking host mongovm16:20013 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:12.319-0500 s20015| 2016-04-06T02:52:56.714-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, event detected [js_test:multi_coll_drop] 2016-04-06T02:53:12.321-0500 s20015| 2016-04-06T02:52:56.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 19 secs, remote host 192.168.100.28:20012) [js_test:multi_coll_drop] 2016-04-06T02:53:12.322-0500 s20015| 2016-04-06T02:52:56.714-0500 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:12.323-0500 s20015| 2016-04-06T02:52:56.714-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:12.326-0500 s20015| 2016-04-06T02:52:56.715-0500 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:12.327-0500 s20015| 2016-04-06T02:52:56.715-0500 D NETWORK [ReplicaSetMonitorWatcher] connected connection! 
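The s20015 (mongos) lines interleaved above are the ReplicaSetMonitorWatcher's periodic refresh: its ismaster probe of mongovm16:20013 hit a recv() timeout, so the host is marked failed and its connection pool cleared, and the watcher reconnects to mongovm16:20012 instead. The probe itself is nothing more than this command (target host illustrative):

var admin = new Mongo("mongovm16:20012").getDB("admin");
var hello = admin.runCommand({ ismaster: 1 });
// ismaster/secondary/hosts/primary describe this member's view of the set.
printjson(hello);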
[js_test:multi_coll_drop] 2016-04-06T02:53:12.330-0500 c20013| 2016-04-06T02:52:08.923-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|58, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.332-0500 c20013| 2016-04-06T02:52:08.923-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:12.333-0500 c20013| 2016-04-06T02:52:08.934-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 748 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.934-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|58, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.336-0500 c20013| 2016-04-06T02:52:08.934-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 748 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.342-0500 c20013| 2016-04-06T02:52:08.934-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 748 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|59, t: 1, h: -7629652830017108482, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.923-0500-5704c02865c17830b843f193", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128923), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -90.0 }, max: { _id: MaxKey } }, left: { min: { _id: -90.0 }, max: { _id: -89.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -89.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929128000|60, t: 1, h: 966868069161096116, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929128000|61, t: 1, h: -5097362160621272068, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f194'), state: 2, when: new Date(1459929128932), why: "splitting chunk [{ _id: -89.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.343-0500 c20013| 2016-04-06T02:52:08.935-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|61, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.345-0500 c20013| 2016-04-06T02:52:08.935-0500 D REPL [rsBackgroundSync-0] fetcher read 3 operations from remote oplog starting at ts: Timestamp 1459929128000|59 and ending at ts: Timestamp 1459929128000|61 [js_test:multi_coll_drop] 2016-04-06T02:53:12.347-0500 c20013| 2016-04-06T02:52:08.935-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
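The three-op batch c20013 just fetched walks the distributed-lock lifecycle for multidrop.coll in config.locks: the previous split releases the lock ({ $set: { state: 0 } }) and the next split immediately re-acquires it ({ $set: { ts: ObjectId(...), state: 2, when: ..., why: "splitting chunk ..." } }). Assuming the usual encoding for this era (0 unlocked, 2 locked), the current holder can be checked with a one-line read:

// Inspect the current distributed-lock document for the collection.
var lock = new Mongo("mongovm16:20011").getDB("config")
    .locks.findOne({ _id: "multidrop.coll" });
printjson(lock);  // expect state: 2 plus when/why while a split holds the lock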
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.348-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.349-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.349-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.351-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.353-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.362-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.364-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.367-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.370-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.372-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.373-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.374-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.378-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.379-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.382-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.384-0500 c20013| 2016-04-06T02:52:08.935-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.384-0500 c20013| 2016-04-06T02:52:08.935-0500 D REPL [rsSync] replication batch size is 3 [js_test:multi_coll_drop] 2016-04-06T02:53:12.384-0500 c20013| 2016-04-06T02:52:08.936-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:12.385-0500 c20013| 2016-04-06T02:52:08.936-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:12.386-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.387-0500 c20013| 
2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.388-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.389-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.390-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.392-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.395-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.396-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.397-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.398-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.401-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.401-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.404-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.405-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.408-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.410-0500 c20013| 2016-04-06T02:52:08.936-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.411-0500 c20013| 2016-04-06T02:52:08.936-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.416-0500 c20013| 2016-04-06T02:52:08.936-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.426-0500 c20013| 2016-04-06T02:52:08.936-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 750 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|58, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.427-0500 c20013| 2016-04-06T02:52:08.936-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 750 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.430-0500 c20013| 2016-04-06T02:52:08.937-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 750 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.434-0500 c20013| 2016-04-06T02:52:08.937-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 752 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.937-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.435-0500 c20013| 2016-04-06T02:52:08.937-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 752 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.439-0500 c20013| 2016-04-06T02:52:08.938-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 752 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|62, t: 1, h: 7031161474010338798, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: -88.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-89.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-88.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.443-0500 c20013| 2016-04-06T02:52:08.938-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog 
starting at ts: Timestamp 1459929128000|62 and ending at ts: Timestamp 1459929128000|62 [js_test:multi_coll_drop] 2016-04-06T02:53:12.443-0500 c20013| 2016-04-06T02:52:08.938-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:53:12.445-0500 c20013| 2016-04-06T02:52:08.938-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.450-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.451-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.452-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.453-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.455-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.456-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.462-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.463-0500 d20010| 2016-04-06T02:52:56.628-0500 I NETWORK [PeriodicTaskRunner] Socket closed remotely, no longer connected (idle 18 secs, remote host 192.168.100.28:20012) [js_test:multi_coll_drop] 2016-04-06T02:53:12.467-0500 c20011| 2016-04-06T02:52:26.837-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 221 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.468-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.470-0500 d20010| 2016-04-06T02:52:56.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 192.168.100.28:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:12.472-0500 c20011| 2016-04-06T02:52:26.837-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 218 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.474-0500 c20011| 2016-04-06T02:52:26.837-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.477-0500 c20011| 2016-04-06T02:52:26.837-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:12.479-0500 c20011| 2016-04-06T02:52:26.837-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 224 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.837-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.481-0500 c20011| 2016-04-06T02:52:26.838-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting 
asynchronous command 224 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:12.482-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.484-0500 d20010| 2016-04-06T02:52:56.714-0500 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_TIMEOUT] server [192.168.100.28:20013] [js_test:multi_coll_drop] 2016-04-06T02:53:12.485-0500 d20010| 2016-04-06T02:52:56.714-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929134050447 microSec, clearing pool for mongovm16:20013 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:53:12.487-0500 c20011| 2016-04-06T02:52:26.842-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 224 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|5, t: 2, h: -9208531786049148683, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03a65c17830b843f1ad'), state: 2, when: new Date(1459929146841), why: "splitting chunk [{ _id: -77.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.490-0500 c20011| 2016-04-06T02:52:26.842-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|5 and ending at ts: Timestamp 1459929146000|5 [js_test:multi_coll_drop] 2016-04-06T02:53:12.493-0500 c20011| 2016-04-06T02:52:26.842-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.494-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.501-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.502-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.503-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.505-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.509-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.510-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.525-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.526-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.528-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.528-0500 c20011| 
2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.529-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.530-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.532-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.535-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.538-0500 c20011| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.538-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.539-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.541-0500 c20011| 2016-04-06T02:52:26.843-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:12.544-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.545-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.545-0500 c20011| 2016-04-06T02:52:26.843-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:12.547-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.548-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.548-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.549-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.550-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.552-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.553-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.554-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.556-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 15] shutting 
down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.557-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.557-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.560-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.563-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.563-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.566-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.567-0500 c20011| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.571-0500 c20011| 2016-04-06T02:52:26.843-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.573-0500 c20011| 2016-04-06T02:52:26.844-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.579-0500 c20011| 2016-04-06T02:52:26.844-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 226 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.580-0500 c20011| 2016-04-06T02:52:26.844-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 226 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:12.584-0500 c20011| 2016-04-06T02:52:26.844-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 227 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.844-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.586-0500 c20011| 
2016-04-06T02:52:26.844-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 226 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.587-0500 c20011| 2016-04-06T02:52:26.844-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 227 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:12.592-0500 c20011| 2016-04-06T02:52:26.846-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.599-0500 c20011| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 229 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.600-0500 c20011| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 229 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:12.600-0500 c20011| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 229 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.602-0500 c20011| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 227 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.604-0500 c20011| 2016-04-06T02:52:26.846-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.605-0500 c20011| 2016-04-06T02:52:26.846-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:12.608-0500 c20011| 2016-04-06T02:52:26.846-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 232 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.846-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.609-0500 c20011| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 232 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:12.611-0500 c20011| 2016-04-06T02:52:26.853-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 232 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|6, t: 2, h: -5811817306687838428, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", 
b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: -76.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-77.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.614-0500 c20011| 2016-04-06T02:52:26.853-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|6 and ending at ts: Timestamp 1459929146000|6 [js_test:multi_coll_drop] 2016-04-06T02:53:12.615-0500 c20011| 2016-04-06T02:52:26.853-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.617-0500 c20011| 2016-04-06T02:52:26.853-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.618-0500 c20011| 2016-04-06T02:52:26.853-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.619-0500 c20011| 2016-04-06T02:52:26.853-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.621-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.622-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.624-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.625-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.627-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.628-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.629-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.630-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.630-0500 c20011| 2016-04-06T02:52:26.854-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:12.632-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.632-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 6] starting thread in pool 
[js_test:multi_coll_drop] 2016-04-06T02:53:12.633-0500 c20011| 2016-04-06T02:52:26.854-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-77.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:12.635-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.636-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.637-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.638-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.640-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.641-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.642-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.643-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.644-0500 c20012| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.646-0500 c20012| 2016-04-06T02:52:10.230-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.652-0500 c20012| 2016-04-06T02:52:10.230-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.656-0500 c20012| 2016-04-06T02:52:10.230-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 956 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.659-0500 c20012| 2016-04-06T02:52:10.230-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 956 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.659-0500 c20012| 2016-04-06T02:52:10.230-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 956 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.661-0500 c20012| 2016-04-06T02:52:10.231-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 958 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.231-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.663-0500 c20012| 2016-04-06T02:52:10.231-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 958 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.668-0500 c20012| 2016-04-06T02:52:10.232-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.672-0500 c20012| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 959 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|3, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.672-0500 c20012| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 959 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.674-0500 c20012| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 959 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.677-0500 c20012| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 958 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.680-0500 c20012| 2016-04-06T02:52:10.233-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.681-0500 c20012| 2016-04-06T02:52:10.233-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:12.685-0500 c20012| 2016-04-06T02:52:10.233-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 962 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.233-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:12.685-0500 c20012| 2016-04-06T02:52:10.233-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 962 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.692-0500 c20012| 2016-04-06T02:52:10.234-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 962 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|4, t: 1, h: -5215253636266494371, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: -82.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-83.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-82.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.697-0500 c20012| 2016-04-06T02:52:10.234-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|4 and ending at ts: Timestamp 1459929130000|4 [js_test:multi_coll_drop] 2016-04-06T02:53:12.698-0500 c20012| 2016-04-06T02:52:10.234-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:12.699-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.700-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.700-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.701-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.702-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.704-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.705-0500 c20013| 2016-04-06T02:52:08.938-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:12.707-0500 c20013| 2016-04-06T02:52:08.938-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.708-0500 c20013| 2016-04-06T02:52:08.938-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-89.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:12.709-0500 c20013| 2016-04-06T02:52:08.938-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-88.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:12.713-0500 c20013| 2016-04-06T02:52:08.938-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.719-0500 c20013| 2016-04-06T02:52:08.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 754 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:12.719-0500 c20013| 2016-04-06T02:52:08.938-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 754 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:12.721-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:12.725-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.727-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.730-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.733-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.736-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:12.737-0500 c20013| 2016-04-06T02:52:08.939-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 754 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:12.738-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.031-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.032-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.033-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.035-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.036-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.036-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.038-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.038-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.039-0500 c20013| 2016-04-06T02:52:08.939-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.041-0500 c20013| 2016-04-06T02:52:08.939-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.047-0500 c20013| 2016-04-06T02:52:08.939-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.055-0500 c20013| 2016-04-06T02:52:08.939-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 756 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|61, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.056-0500 c20013| 2016-04-06T02:52:08.939-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 756 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:13.057-0500 c20013| 2016-04-06T02:52:08.939-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 756 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.059-0500 c20013| 2016-04-06T02:52:08.940-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 758 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.940-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|61, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.061-0500 c20013| 2016-04-06T02:52:08.940-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 758 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:13.061-0500 c20013| 2016-04-06T02:52:08.940-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 758 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.063-0500 c20013| 2016-04-06T02:52:08.940-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|62, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.065-0500 c20013| 2016-04-06T02:52:08.940-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:13.066-0500 c20013| 2016-04-06T02:52:08.940-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 760 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.940-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.067-0500 c20013| 2016-04-06T02:52:08.940-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 760 on host mongovm16:20011 
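The repeating getMore records here (RemoteCommand 758/760 on c20013, and 224/227/232 on c20011 above) are the steady-state oplog tail: each secondary holds an awaitData cursor on its sync source's local.oplog.rs and re-issues getMore with maxTimeMS: 2500 whenever a batch (often empty) comes back; the term and lastKnownCommittedOpTime fields piggyback replication state on every round trip. A minimal shell approximation of that loop, assuming a direct connection to a member (the internal term/lastKnownCommittedOpTime fields are omitted; they are replication-internal):

    // Sketch: tail local.oplog.rs the way the fetcher does -- an awaitData
    // cursor plus repeated getMore calls on the same cursor id.
    var local = db.getSiblingDB("local");
    var res = local.runCommand(
        { find: "oplog.rs", tailable: true, awaitData: true, maxTimeMS: 2500 });
    res.cursor.firstBatch.forEach(printjson);   // ops already in the oplog
    // An empty nextBatch just means "nothing new within 2500 ms"; re-issue:
    res = local.runCommand(
        { getMore: res.cursor.id, collection: "oplog.rs", maxTimeMS: 2500 });
    res.cursor.nextBatch.forEach(printjson);    // newly replicated ops, if any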
[js_test:multi_coll_drop] 2016-04-06T02:53:13.070-0500 c20013| 2016-04-06T02:52:08.941-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.073-0500 c20013| 2016-04-06T02:52:08.941-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 761 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.073-0500 c20013| 2016-04-06T02:52:08.941-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 761 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:13.076-0500 c20013| 2016-04-06T02:52:08.941-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 761 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.081-0500 c20013| 2016-04-06T02:52:08.941-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 760 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|63, t: 1, h: 964671473381320939, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.940-0500-5704c02865c17830b843f195", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128940), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -89.0 }, max: { _id: MaxKey } }, left: { min: { _id: -89.0 }, max: { _id: -88.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -88.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.084-0500 c20013| 2016-04-06T02:52:08.941-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|63 and ending at ts: Timestamp 1459929128000|63 [js_test:multi_coll_drop] 2016-04-06T02:53:13.085-0500 c20013| 2016-04-06T02:52:08.941-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.089-0500 c20013| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.090-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.092-0500 c20013| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.094-0500 c20013| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.098-0500 c20013| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.099-0500 c20011| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.100-0500 c20011| 2016-04-06T02:52:26.854-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-76.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:13.102-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.102-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.103-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.104-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.110-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.113-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.116-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.117-0500 c20013| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.120-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.122-0500 c20011| 2016-04-06T02:52:26.855-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 234 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.855-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.130-0500 c20013| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.132-0500 c20013| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 6] 
starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.133-0500 c20013| 2016-04-06T02:52:08.941-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.136-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.137-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.137-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.138-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.138-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.142-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.144-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.148-0500 c20013| 2016-04-06T02:52:08.942-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:13.149-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.150-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.150-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.151-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.152-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.153-0500 c20011| 2016-04-06T02:52:26.855-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 234 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.154-0500 c20011| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.155-0500 c20011| 2016-04-06T02:52:26.857-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.157-0500 c20011| 2016-04-06T02:52:26.857-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.158-0500 c20011| 2016-04-06T02:52:26.857-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.158-0500 c20011| 2016-04-06T02:52:26.857-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.159-0500 c20011| 2016-04-06T02:52:26.857-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.160-0500 c20011| 2016-04-06T02:52:26.857-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.161-0500 c20011| 2016-04-06T02:52:26.861-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.162-0500 c20011| 2016-04-06T02:52:26.861-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.165-0500 c20011| 2016-04-06T02:52:26.861-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.170-0500 c20011| 2016-04-06T02:52:26.861-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 235 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.170-0500 c20011| 2016-04-06T02:52:26.861-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 235 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.171-0500 c20011| 2016-04-06T02:52:26.862-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 235 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.172-0500 c20011| 2016-04-06T02:52:26.862-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 234 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.176-0500 c20011| 2016-04-06T02:52:26.863-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.179-0500 c20011| 2016-04-06T02:52:26.863-0500 D 
ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 238 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.180-0500 c20011| 2016-04-06T02:52:26.863-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 238 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.181-0500 c20011| 2016-04-06T02:52:26.863-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 238 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.182-0500 c20011| 2016-04-06T02:52:26.863-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.182-0500 c20011| 2016-04-06T02:52:26.863-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:13.199-0500 c20011| 2016-04-06T02:52:26.863-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 240 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.863-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.204-0500 c20011| 2016-04-06T02:52:26.864-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 240 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.207-0500 c20011| 2016-04-06T02:52:26.864-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 240 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|7, t: 2, h: -8448965826059055622, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:26.862-0500-5704c03a65c17830b843f1ae", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146862), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -77.0 }, max: { _id: MaxKey } }, left: { min: { _id: -77.0 }, max: { _id: -76.0 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -76.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.210-0500 c20011| 2016-04-06T02:52:26.871-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|7 and ending at ts: Timestamp 1459929146000|7 [js_test:multi_coll_drop] 2016-04-06T02:53:13.216-0500 c20011| 2016-04-06T02:52:26.871-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
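Besides the config.chunks updates, each completed split is also recorded as an insert into config.changelog (the op: "i" entry in Request 240's batch above, what: "split", carrying the before range plus the resulting left/right chunks with their bumped versions 1000|49 and 1000|50). Those documents are ordinary data and can be read back; a sketch, assuming a shell pointed at the config servers:

    // Sketch: read back the split history recorded by the changelog entry above.
    var conf = db.getSiblingDB("config");
    conf.changelog.find({ what: "split", ns: "multidrop.coll" })
                  .sort({ time: -1 })
                  .forEach(printjson);
    // details.before is the pre-split range; details.left / details.right are
    // the two chunks it became, each with its new lastmod (chunk version).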
[js_test:multi_coll_drop] 2016-04-06T02:53:13.217-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.219-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.221-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.221-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.225-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.226-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.228-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.229-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.229-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.230-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.231-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.234-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.234-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.235-0500 c20011| 2016-04-06T02:52:26.872-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:13.237-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.238-0500 c20011| 2016-04-06T02:52:26.872-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.240-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.242-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.242-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.244-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop]
2016-04-06T02:53:13.245-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.248-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.249-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.249-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.250-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.251-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.253-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.255-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.256-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.257-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.259-0500 c20011| 2016-04-06T02:52:26.873-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.260-0500 c20011| 2016-04-06T02:52:26.875-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.262-0500 c20011| 2016-04-06T02:52:26.875-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.266-0500 c20011| 2016-04-06T02:52:26.875-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 242 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.875-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.268-0500 c20011| 2016-04-06T02:52:26.875-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 242 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.269-0500 c20011| 2016-04-06T02:52:26.876-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.274-0500 c20011| 2016-04-06T02:52:26.876-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.277-0500 c20011| 2016-04-06T02:52:26.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 243 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.279-0500 c20011| 2016-04-06T02:52:26.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 243 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.281-0500 c20011| 2016-04-06T02:52:26.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 243 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.282-0500 c20011| 2016-04-06T02:52:26.877-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 242 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.285-0500 c20011| 2016-04-06T02:52:26.877-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.285-0500 c20011| 2016-04-06T02:52:26.877-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:13.288-0500 c20011| 2016-04-06T02:52:26.877-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 246 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.877-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.289-0500 c20011| 2016-04-06T02:52:26.877-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 246 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.291-0500 c20011| 2016-04-06T02:52:26.877-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 246 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|8, t: 2, h: -1200371352031369196, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.292-0500 c20011| 2016-04-06T02:52:26.878-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|8 and ending at ts: Timestamp 1459929146000|8
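Request 246's batch closes the loop on the distributed lock seen earlier: the same config.locks document that was set to state: 2 (locked, with why: "splitting chunk [{ _id: -77.0 }, { _id: MaxKey }) in multidrop.coll") is now updated with { $set: { state: 0 } }, releasing the lock once the split commit and changelog entry have replicated. A small check in the style of the jstests themselves (illustrative; assumes a shell connected to the config servers):

    // Sketch: after the release above, the lock document should be unlocked
    // (state 0; state 2 would mean an operation still holds multidrop.coll).
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    assert.eq(0, lock.state, "distributed lock for multidrop.coll not released");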
ts: Timestamp 1459929146000|8 and ending at ts: Timestamp 1459929146000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:13.296-0500 c20011| 2016-04-06T02:52:26.878-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.305-0500 c20011| 2016-04-06T02:52:26.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 248 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.306-0500 c20011| 2016-04-06T02:52:26.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 248 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.308-0500 c20011| 2016-04-06T02:52:26.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 248 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.309-0500 c20011| 2016-04-06T02:52:26.878-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.310-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.310-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.314-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.316-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.321-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.323-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.324-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.325-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.326-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.327-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.327-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.328-0500 c20011| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.328-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.330-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.330-0500 c20011| 2016-04-06T02:52:26.879-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:13.332-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.332-0500 c20011| 2016-04-06T02:52:26.879-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:13.333-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.333-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.335-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
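The batch applied above is the single oplog entry fetched by request 246: an update on config.locks that sets state: 0, that is, the distributed lock for "multidrop.coll" being released. The "Using idhack" line shows the applier skipping query planning because the update targets an exact _id. As a sketch only (values copied from the log, not part of the test script), the applied entry is equivalent to running this in the shell against the config server:

    // Replay of { op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" },
    //             o: { $set: { state: 0 } } }; state 0 marks the lock as free.
    db.getSiblingDB("config").locks.update(
        { _id: "multidrop.coll" },   // exact-_id predicate, hence "idhack"
        { $set: { state: 0 } }
    );
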
2016-04-06T02:53:13.335-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.336-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.345-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.345-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.345-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.346-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.347-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.347-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.348-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.348-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.349-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.350-0500 c20011| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.352-0500 c20011| 2016-04-06T02:52:26.880-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 250 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.880-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.353-0500 c20011| 2016-04-06T02:52:26.880-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 250 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.355-0500 c20011| 2016-04-06T02:52:26.881-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 250 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.357-0500 c20011| 2016-04-06T02:52:26.881-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.357-0500 c20011| 2016-04-06T02:52:26.881-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:13.362-0500 c20011| 2016-04-06T02:52:26.881-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 252 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.881-0500 cmd:{ getMore: 22197973872, collection: 
"oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.363-0500 c20011| 2016-04-06T02:52:26.881-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 252 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.365-0500 c20011| 2016-04-06T02:52:26.884-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 252 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|9, t: 2, h: 622575099516940850, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03a65c17830b843f1af'), state: 2, when: new Date(1459929146883), why: "splitting chunk [{ _id: -76.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.366-0500 c20011| 2016-04-06T02:52:26.884-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|9 and ending at ts: Timestamp 1459929146000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:13.367-0500 c20011| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.367-0500 c20011| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.368-0500 c20011| 2016-04-06T02:52:26.887-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 254 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.887-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.368-0500 c20011| 2016-04-06T02:52:26.887-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 254 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.369-0500 c20011| 2016-04-06T02:52:26.887-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.370-0500 c20011| 2016-04-06T02:52:26.888-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.372-0500 c20011| 2016-04-06T02:52:26.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 255 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.375-0500 c20011| 2016-04-06T02:52:26.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 255 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.381-0500 c20011| 2016-04-06T02:52:26.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 255 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.382-0500 c20011| 2016-04-06T02:52:26.888-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.384-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.386-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.386-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.388-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.389-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.392-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.393-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.393-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.394-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.395-0500 c20011| 2016-04-06T02:52:26.888-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.396-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.397-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.397-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.397-0500 c20011| 2016-04-06T02:52:26.889-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:13.407-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.408-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.410-0500 c20011| 2016-04-06T02:52:26.889-0500 D QUERY [repl writer worker 5] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:13.411-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.413-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.413-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
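Each of these one-entry batches arrives through the same loop visible above: rsBackgroundSync issues a getMore on the tailable oplog cursor (id 22197973872) with maxTimeMS: 2500 and the latest lastKnownCommittedOpTime, applies whatever comes back (here the lock being re-taken with state: 2 for the next chunk split), and immediately re-issues the getMore. A minimal shell sketch of that tailing pattern, assuming the standard DBQuery.Option flags (the server does this internally; the timestamp value is taken from the log):

    // Tail local.oplog.rs the way the fetcher above does: tailable + awaitData
    // so each batch request blocks briefly at the tail (cf. the 2500ms getMores)
    // instead of polling, and oplogReplay to seek the ts predicate efficiently.
    var cur = db.getSiblingDB("local").oplog.rs
                .find({ ts: { $gte: Timestamp(1459929146, 9) } })
                .addOption(DBQuery.Option.tailable)
                .addOption(DBQuery.Option.awaitData)
                .addOption(DBQuery.Option.oplogReplay);
    while (cur.hasNext()) printjson(cur.next());
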
2016-04-06T02:53:13.414-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.422-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.422-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.425-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.427-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.429-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.430-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.432-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.433-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.434-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.434-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.468-0500 c20011| 2016-04-06T02:52:26.889-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.471-0500 c20011| 2016-04-06T02:52:26.891-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.476-0500 c20011| 2016-04-06T02:52:26.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 257 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.476-0500 c20011| 
2016-04-06T02:52:26.891-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.480-0500 c20011| 2016-04-06T02:52:26.891-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.480-0500 c20011| 2016-04-06T02:52:26.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 257 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.481-0500 c20011| 2016-04-06T02:52:26.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 257 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.482-0500 c20011| 2016-04-06T02:52:26.891-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 254 finished with response: { cursor: { nextBatch: [], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.485-0500 c20011| 2016-04-06T02:52:26.891-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.487-0500 c20011| 2016-04-06T02:52:26.891-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.487-0500 c20011| 2016-04-06T02:52:26.892-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:13.492-0500 c20011| 2016-04-06T02:52:26.892-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 260 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.892-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.498-0500 c20011| 2016-04-06T02:52:26.892-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.499-0500 c20011| 2016-04-06T02:52:26.892-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 260 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.506-0500 c20011| 2016-04-06T02:52:26.892-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 261 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.507-0500 c20011| 2016-04-06T02:52:26.892-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 261 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.518-0500 c20011| 2016-04-06T02:52:26.892-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 261 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.522-0500 c20011| 2016-04-06T02:52:26.892-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.550-0500 c20011| 2016-04-06T02:52:26.892-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 263 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.552-0500 c20011| 2016-04-06T02:52:26.892-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 263 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.559-0500 c20011| 2016-04-06T02:52:26.892-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 263 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.575-0500 c20011| 2016-04-06T02:52:26.894-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 260 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|10, t: 2, h: 8129632561130330747, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: -75.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 22197973872, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.583-0500 c20011| 2016-04-06T02:52:26.894-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|10 and ending at ts: Timestamp 1459929146000|10 [js_test:multi_coll_drop] 2016-04-06T02:53:13.590-0500 c20011| 2016-04-06T02:52:26.894-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.592-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.600-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.603-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.603-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.605-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.605-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.606-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.607-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.608-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.609-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.612-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.613-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.614-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.615-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.617-0500 c20011| 2016-04-06T02:52:26.894-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:13.618-0500 c20011| 2016-04-06T02:52:26.894-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.618-0500 c20011| 2016-04-06T02:52:26.895-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-76.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:13.620-0500 c20011| 2016-04-06T02:52:26.895-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-75.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:13.621-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.622-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
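The command entry fetched by request 260 and applied here is the split commit itself: one applyOps that upserts the two chunks produced by splitting [{ _id: -76.0 }, { _id: MaxKey }) at { _id: -75.0 }, so the config.chunks metadata changes atomically and with majority write concern. Reconstructed from the logged entry as a sketch of the command the shard sent (every value copied from the log above):

    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp(1000, 51),
                   lastmodEpoch: ObjectId("5704c02806c33406d4d9c0c0"),
                   ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: -75.0 },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-76.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp(1000, 52),
                   lastmodEpoch: ObjectId("5704c02806c33406d4d9c0c0"),
                   ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-75.0" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
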
2016-04-06T02:53:13.622-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.625-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.627-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.629-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.630-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.632-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.633-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.635-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.637-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.640-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.641-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.641-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.643-0500 c20011| 2016-04-06T02:52:26.895-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.649-0500 c20011| 2016-04-06T02:52:26.898-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 266 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.898-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.649-0500 c20011| 2016-04-06T02:52:26.898-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 266 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.651-0500 c20011| 2016-04-06T02:52:26.905-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.654-0500 c20011| 2016-04-06T02:52:26.905-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:13.665-0500 c20011| 2016-04-06T02:52:26.908-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.672-0500 c20011| 2016-04-06T02:52:26.908-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.719-0500 c20011| 2016-04-06T02:52:26.908-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 267 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:13.731-0500 c20011| 2016-04-06T02:52:26.908-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 267 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.737-0500 c20011| 2016-04-06T02:52:27.555-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 268 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:37.555-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.740-0500 c20011| 2016-04-06T02:52:27.555-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 268 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:13.742-0500 c20011| 2016-04-06T02:52:27.556-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 268 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.745-0500 c20011| 2016-04-06T02:52:27.556-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:29.556Z [js_test:multi_coll_drop] 2016-04-06T02:53:13.751-0500 c20011| 2016-04-06T02:52:28.056-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.751-0500 c20011| 2016-04-06T02:52:28.056-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:13.821-0500 c20011| 2016-04-06T02:52:28.056-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:13.824-0500 c20011| 2016-04-06T02:52:28.811-0500 D ASIO 
[ReplicationExecutor] startCommand: RemoteCommand 270 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.827-0500 c20011| 2016-04-06T02:52:28.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 270 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.827-0500 c20011| 2016-04-06T02:52:29.558-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 271 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:39.558-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.828-0500 c20011| 2016-04-06T02:52:29.558-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 271 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:13.829-0500 c20011| 2016-04-06T02:52:29.559-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 271 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.835-0500 c20011| 2016-04-06T02:52:29.559-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:31.559Z [js_test:multi_coll_drop] 2016-04-06T02:53:13.836-0500 c20011| 2016-04-06T02:52:30.056-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.837-0500 c20011| 2016-04-06T02:52:30.056-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:13.839-0500 c20011| 2016-04-06T02:52:30.057-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:13.842-0500 c20011| 2016-04-06T02:52:31.559-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 273 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:41.559-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.843-0500 c20011| 2016-04-06T02:52:31.560-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 273 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:13.846-0500 c20011| 2016-04-06T02:52:31.566-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 273 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.853-0500 c20011| 2016-04-06T02:52:31.566-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:33.566Z [js_test:multi_coll_drop] 2016-04-06T02:53:13.877-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 266 timed out, adjusted timeout after getting connection 
from pool was 5000ms, op was id: 5, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 2016-04-06T02:52:26.898-0500, request: RemoteCommand 266 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.898-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.891-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Operation timing out; original request was: RemoteCommand 266 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.898-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.908-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to execute command: RemoteCommand 266 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.898-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 266 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.898-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.911-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 266 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 266 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.898-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.915-0500 c20011| 2016-04-06T02:52:31.900-0500 D REPL [rsBackgroundSync-0] Error returned from oplog query: ExceededTimeLimit: Operation timed out, request was RemoteCommand 266 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.898-0500 cmd:{ getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.917-0500 c20011| 2016-04-06T02:52:31.900-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.919-0500 c20011| 2016-04-06T02:52:31.900-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:13.922-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 270 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.923-0500 c20011| 2016-04-06T02:52:31.900-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:31.900Z [js_test:multi_coll_drop] 2016-04-06T02:53:13.925-0500 c20011| 2016-04-06T02:52:31.900-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:31.900Z [js_test:multi_coll_drop] 2016-04-06T02:53:13.930-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO 
[NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 270 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:13.931-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 270 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:13.936-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 277 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.937-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.939-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 279 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:41.900-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.941-0500 c20011| 2016-04-06T02:52:31.900-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 279 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:13.943-0500 c20011| 2016-04-06T02:52:31.901-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 278 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.948-0500 c20011| 2016-04-06T02:52:31.901-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.948-0500 c20011| 2016-04-06T02:52:31.901-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:13.952-0500 c20011| 2016-04-06T02:52:31.901-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:13.954-0500 c20011| 2016-04-06T02:52:31.901-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 279 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.954-0500 c20011| 2016-04-06T02:52:31.901-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:34.401Z [js_test:multi_coll_drop] 2016-04-06T02:53:13.956-0500 c20011| 2016-04-06T02:52:32.204-0500 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 5000ms [js_test:multi_coll_drop] 2016-04-06T02:53:13.958-0500 c20011| 2016-04-06T02:52:32.204-0500 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected [js_test:multi_coll_drop] 2016-04-06T02:53:13.961-0500 c20011| 2016-04-06T02:52:32.204-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 281 -- target:mongovm16:20012 db:admin 
expDate:2016-04-06T02:52:37.204-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 2, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.965-0500 c20011| 2016-04-06T02:52:32.204-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 282 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:37.204-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 2, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.967-0500 c20011| 2016-04-06T02:52:32.204-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.970-0500 c20011| 2016-04-06T02:52:32.204-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 282 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:13.971-0500 c20011| 2016-04-06T02:52:32.204-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 283 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.974-0500 c20011| 2016-04-06T02:52:32.206-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 282 finished with response: { term: 2, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.974-0500 c20011| 2016-04-06T02:52:32.206-0500 I REPL [ReplicationExecutor] dry election run succeeded, running for election [js_test:multi_coll_drop] 2016-04-06T02:53:13.982-0500 c20011| 2016-04-06T02:52:32.206-0500 D QUERY [replExecDBWorker-1] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:13.985-0500 c20011| 2016-04-06T02:52:32.206-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 285 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:37.206-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 3, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.989-0500 c20011| 2016-04-06T02:52:32.206-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 286 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:37.206-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 3, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:13.990-0500 c20011| 2016-04-06T02:52:32.206-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.990-0500 c20011| 2016-04-06T02:52:32.206-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 286 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:13.997-0500 c20011| 2016-04-06T02:52:32.206-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 287 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:13.998-0500 c20011| 2016-04-06T02:52:32.206-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 286 finished with response: { term: 3, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:13.999-0500 c20011| 2016-04-06T02:52:32.207-0500 I REPL [ReplicationExecutor] election succeeded, 
assuming primary role in term 3 [js_test:multi_coll_drop] 2016-04-06T02:53:14.005-0500 c20011| 2016-04-06T02:52:32.207-0500 I REPL [ReplicationExecutor] transition to PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:14.014-0500 c20011| 2016-04-06T02:52:32.207-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:32.207Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.016-0500 c20011| 2016-04-06T02:52:32.207-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:32.207Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.020-0500 c20011| 2016-04-06T02:52:32.207-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 289 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.021-0500 c20011| 2016-04-06T02:52:32.207-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 291 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:42.207-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.022-0500 c20011| 2016-04-06T02:52:32.207-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:14.023-0500 c20011| 2016-04-06T02:52:32.207-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 291 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:14.024-0500 c20011| 2016-04-06T02:52:32.207-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 290 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:14.025-0500 c20011| 2016-04-06T02:52:32.207-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 291 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.026-0500 c20011| 2016-04-06T02:52:32.207-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:34.207Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.027-0500 c20011| 2016-04-06T02:52:32.901-0500 D REPL [rsSync] Removing temporary collections from config [js_test:multi_coll_drop] 2016-04-06T02:53:14.028-0500 c20011| 2016-04-06T02:52:32.901-0500 D REPL [rsSync] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929146000|10, t: 2 }, firstOpTimeOfMyTerm: { ts: Timestamp 2147483647000|0, t: 2147483647 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.028-0500 c20011| 2016-04-06T02:52:32.901-0500 D REPL [rsSync] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929146000|10, t: 2 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.030-0500 c20011| 2016-04-06T02:52:32.901-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted [js_test:multi_coll_drop] 2016-04-06T02:53:14.031-0500 c20011| 2016-04-06T02:52:32.993-0500 D REPL [WTJournalFlusher] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929146000|10, t: 2 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 
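What precedes this is a complete election: getMore 266 to the sync source mongovm16:20012 timed out, the node dropped its sync source, saw no PRIMARY for 5000ms, ran a dry-run replSetRequestVotes in the current term 2, then a real vote in term 3, won, and transitioned to PRIMARY, with "database writes are now permitted" only once the first optime of the new term ({ ts: Timestamp 1459929152000|2, t: 3 }) was written. A minimal sketch for observing such a step-up from the shell, assuming the standard replSetGetStatus command (field names per this server generation):

    // Report each member's state and the set's current term; after the
    // election above this would show mongovm16:20011 as PRIMARY in term 3.
    var st = db.getSiblingDB("admin").runCommand({ replSetGetStatus: 1 });
    print("term: " + st.term);
    st.members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr);
    });
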
2016-04-06T02:53:14.032-0500 c20011| 2016-04-06T02:52:34.207-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 293 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:44.207-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.033-0500 c20011| 2016-04-06T02:52:34.209-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 293 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:14.035-0500 c20011| 2016-04-06T02:52:34.209-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 293 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.038-0500 c20011| 2016-04-06T02:52:34.209-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:36.209Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.041-0500 c20011| 2016-04-06T02:52:34.401-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.043-0500 c20011| 2016-04-06T02:52:34.401-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:14.045-0500 c20011| 2016-04-06T02:52:34.402-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.045-0500 c20011| 2016-04-06T02:52:34.902-0500 D COMMAND [conn28] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.048-0500 c20011| 2016-04-06T02:52:34.902-0500 D QUERY [conn28] Only one plan is available; it will be run but will not be cached. 
query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:14.051-0500 c20011| 2016-04-06T02:52:34.902-0500 I COMMAND [conn28] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.052-0500 c20011| 2016-04-06T02:52:34.903-0500 D COMMAND [conn30] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929146000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.056-0500 c20011| 2016-04-06T02:52:34.903-0500 I COMMAND [conn30] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929146000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 3 } planSummary: COLLSCAN cursorid:19853084149 keysExamined:0 docsExamined:2 numYields:0 nreturned:2 reslen:1154 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.059-0500 c20011| 2016-04-06T02:52:34.906-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.061-0500 c20011| 2016-04-06T02:52:36.209-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 295 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:46.209-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.062-0500 c20011| 2016-04-06T02:52:36.210-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 295 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:14.064-0500 c20011| 2016-04-06T02:52:36.210-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 295 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.065-0500 c20011| 2016-04-06T02:52:36.210-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929146000|10, t: 2 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.065-0500 c20011| 2016-04-06T02:52:36.210-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.081-0500 c20011| 2016-04-06T02:52:36.210-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:38.210Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.083-0500 c20011| 2016-04-06T02:52:36.211-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } 
cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 1304ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.091-0500 c20011| 2016-04-06T02:52:36.211-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.094-0500 c20011| 2016-04-06T02:52:36.903-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.096-0500 c20011| 2016-04-06T02:52:36.903-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:14.098-0500 c20011| 2016-04-06T02:52:36.903-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.102-0500 c20011| 2016-04-06T02:52:37.204-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 281: ExceededTimeLimit: Couldn't get a connection within the time limit [js_test:multi_coll_drop] 2016-04-06T02:53:14.103-0500 c20011| 2016-04-06T02:52:37.206-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 285: ExceededTimeLimit: Couldn't get a connection within the time limit [js_test:multi_coll_drop] 2016-04-06T02:53:14.107-0500 c20011| 2016-04-06T02:52:38.210-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 297 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:48.210-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.107-0500 c20011| 2016-04-06T02:52:38.210-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 297 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:14.110-0500 c20011| 2016-04-06T02:52:38.212-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 297 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.111-0500 c20011| 2016-04-06T02:52:38.212-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:40.212Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.116-0500 c20011| 2016-04-06T02:52:38.712-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2500ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.119-0500 c20011| 2016-04-06T02:52:38.713-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 
1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.120-0500 c20011| 2016-04-06T02:52:38.716-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.121-0500 c20011| 2016-04-06T02:52:38.716-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.124-0500 c20011| 2016-04-06T02:52:38.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 277: ExceededTimeLimit: Couldn't get a connection within the time limit [js_test:multi_coll_drop] 2016-04-06T02:53:14.125-0500 c20011| 2016-04-06T02:52:38.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 289: ExceededTimeLimit: Couldn't get a connection within the time limit [js_test:multi_coll_drop] 2016-04-06T02:53:14.145-0500 c20011| 2016-04-06T02:52:38.812-0500 I REPL [ReplicationExecutor] Error in heartbeat request to mongovm16:20012; ExceededTimeLimit: Couldn't get a connection within the time limit [js_test:multi_coll_drop] 2016-04-06T02:53:14.146-0500 c20011| 2016-04-06T02:52:38.812-0500 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:1, msg: Couldn't get a connection within the time limit [js_test:multi_coll_drop] 2016-04-06T02:53:14.149-0500 c20011| 2016-04-06T02:52:38.812-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:40.811Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.151-0500 c20011| 2016-04-06T02:52:38.903-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.152-0500 c20011| 2016-04-06T02:52:38.903-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:14.155-0500 c20011| 2016-04-06T02:52:38.907-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } numYields:0 reslen:480 locks:{} protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.194-0500 c20011| 2016-04-06T02:52:40.212-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 299 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:50.212-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.198-0500 c20011| 2016-04-06T02:52:40.212-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 299 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:14.212-0500 c20011| 2016-04-06T02:52:40.213-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 299 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.226-0500 c20011| 2016-04-06T02:52:40.213-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:42.213Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.238-0500 c20011| 2016-04-06T02:52:40.811-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 301 -- target:mongovm16:20012 
db:admin expDate:2016-04-06T02:52:50.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.239-0500 c20011| 2016-04-06T02:52:40.907-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.240-0500 c20011| 2016-04-06T02:52:40.907-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:14.243-0500 c20011| 2016-04-06T02:52:40.907-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.257-0500 c20011| 2016-04-06T02:52:41.213-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2500ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.277-0500 c20011| 2016-04-06T02:52:41.215-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.296-0500 c20011| 2016-04-06T02:52:41.708-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:14.297-0500 c20011| 2016-04-06T02:52:41.708-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:14.306-0500 c20011| 2016-04-06T02:52:41.708-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929152000|2, t: 3 } and is durable through: { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.311-0500 c20011| 2016-04-06T02:52:41.708-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.313-0500 c20011| 2016-04-06T02:52:41.719-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 267 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.313-0500 c20011| 2016-04-06T02:52:41.719-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 
2016-04-06T02:53:14.319-0500 c20011| 2016-04-06T02:52:41.719-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20012: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:14.321-0500 c20011| 2016-04-06T02:52:41.719-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:14.323-0500 c20011| 2016-04-06T02:52:41.719-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.323-0500 c20011| 2016-04-06T02:52:41.719-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:14.324-0500 c20011| 2016-04-06T02:52:41.719-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 287 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:14.325-0500 c20011| 2016-04-06T02:52:41.720-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:14.325-0500 c20011| 2016-04-06T02:52:41.720-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 278 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:14.327-0500 c20011| 2016-04-06T02:52:41.720-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.329-0500 c20011| 2016-04-06T02:52:41.720-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 301 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:14.332-0500 c20011| 2016-04-06T02:52:41.720-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.332-0500 c20011| 2016-04-06T02:52:41.720-0500 D COMMAND [conn41] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.333-0500 c20011| 2016-04-06T02:52:41.720-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:14.334-0500 c20011| 2016-04-06T02:52:41.720-0500 I COMMAND [conn41] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.336-0500 c20011| 2016-04-06T02:52:41.720-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:14.336-0500 c20011| 2016-04-06T02:52:41.720-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 290 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:14.336-0500 c20011| 2016-04-06T02:52:41.721-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.338-0500 c20011| 2016-04-06T02:52:41.721-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.340-0500 c20011| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to mongovm16:20012 - HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:14.342-0500 c20011| 2016-04-06T02:52:41.721-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 283 
finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:14.345-0500 c20011| 2016-04-06T02:52:41.722-0500 D COMMAND [conn38] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.345-0500 c20011| 2016-04-06T02:52:41.722-0500 D QUERY [conn38] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.350-0500 c20011| 2016-04-06T02:52:41.722-0500 I WRITE [conn38] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.351-0500 c20011| 2016-04-06T02:52:41.722-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.352-0500 c20011| 2016-04-06T02:52:41.722-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 301 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, opTime: { ts: Timestamp 1459929161000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.356-0500 c20011| 2016-04-06T02:52:41.722-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot optime: { ts: Timestamp 1459929146000|10, t: 2 }, currentCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.361-0500 c20011| 2016-04-06T02:52:41.722-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } cursorid:19853084149 numYields:1 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 506ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.362-0500 c20011| 2016-04-06T02:52:41.722-0500 I REPL [ReplicationExecutor] Member mongovm16:20012 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:14.362-0500 c20011| 2016-04-06T02:52:41.722-0500 D COMMAND [conn36] run command admin.$cmd { _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.364-0500 c20011| 2016-04-06T02:52:41.722-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:43.722Z [js_test:multi_coll_drop] 2016-04-06T02:53:14.364-0500 c20011| 2016-04-06T02:52:41.722-0500 D COMMAND [conn36] command: _getUserCacheGeneration [js_test:multi_coll_drop] 2016-04-06T02:53:14.365-0500 c20011| 2016-04-06T02:52:41.722-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.369-0500 c20011| 2016-04-06T02:52:41.722-0500 I COMMAND [conn36] command admin.$cmd command: _getUserCacheGeneration 
{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } numYields:0 reslen:317 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.371-0500 c20011| 2016-04-06T02:52:41.722-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:41.710-0500-5704c04965c17830b843f1b0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161710), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -76.0 }, max: { _id: MaxKey } }, left: { min: { _id: -76.0 }, max: { _id: -75.0 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -75.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.374-0500 c20011| 2016-04-06T02:52:41.725-0500 D COMMAND [conn36] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.375-0500 c20011| 2016-04-06T02:52:41.725-0500 D QUERY [conn36] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.386-0500 c20011| 2016-04-06T02:52:41.725-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.391-0500 c20011| 2016-04-06T02:52:41.725-0500 D REPL [conn36] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.409-0500 c20011| 2016-04-06T02:52:41.725-0500 I WRITE [conn36] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.412-0500 c20011| 2016-04-06T02:52:41.725-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:60973 #42 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:14.421-0500 c20011| 2016-04-06T02:52:41.725-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } numYields:0 reslen:500 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.425-0500 c20011| 2016-04-06T02:52:41.725-0500 D COMMAND [conn42] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.430-0500 c20011| 2016-04-06T02:52:41.726-0500 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} 
protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.450-0500 c20011| 2016-04-06T02:52:41.726-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.482-0500 c20011| 2016-04-06T02:52:41.726-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } cursorid:19853084149 numYields:0 nreturned:2 reslen:1057 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.487-0500 c20011| 2016-04-06T02:52:41.726-0500 D COMMAND [conn42] run command admin.$cmd { _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.494-0500 c20011| 2016-04-06T02:52:41.726-0500 D COMMAND [conn42] command: _getUserCacheGeneration [js_test:multi_coll_drop] 2016-04-06T02:53:14.498-0500 c20011| 2016-04-06T02:52:41.726-0500 I COMMAND [conn42] command admin.$cmd command: _getUserCacheGeneration { _getUserCacheGeneration: 1, maxTimeMS: 30000 } numYields:0 reslen:317 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.499-0500 c20011| 2016-04-06T02:52:41.728-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.502-0500 c20011| 2016-04-06T02:52:41.729-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:14.502-0500 c20011| 2016-04-06T02:52:41.729-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:14.505-0500 c20011| 2016-04-06T02:52:41.729-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.517-0500 c20011| 2016-04-06T02:52:41.729-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|1, t: 3 } and is durable through: { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.533-0500 c20011| 2016-04-06T02:52:41.729-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.554-0500 c20011| 2016-04-06T02:52:41.729-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.575-0500 c20011| 2016-04-06T02:52:41.731-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:14.575-0500 c20011| 2016-04-06T02:52:41.731-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:14.579-0500 c20011| 2016-04-06T02:52:41.731-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.592-0500 c20011| 2016-04-06T02:52:41.731-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 3 } and is durable through: { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.597-0500 c20011| 2016-04-06T02:52:41.731-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.601-0500 c20011| 2016-04-06T02:52:41.731-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.603-0500 c20011| 2016-04-06T02:52:41.731-0500 D REPL [conn36] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.608-0500 c20011| 2016-04-06T02:52:41.731-0500 D REPL [conn36] Required snapshot optime: { ts: Timestamp 1459929161000|3, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.616-0500 c20011| 2016-04-06T02:52:41.731-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|2, t: 3 } is not yet part of 
the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.620-0500 c20011| 2016-04-06T02:52:41.733-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:14.622-0500 c20011| 2016-04-06T02:52:41.733-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:14.626-0500 c20011| 2016-04-06T02:52:41.733-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.628-0500 c20011| 2016-04-06T02:52:41.733-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 3 } and is durable through: { ts: Timestamp 1459929161000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.630-0500 c20011| 2016-04-06T02:52:41.733-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.633-0500 c20011| 2016-04-06T02:52:41.733-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.644-0500 c20011| 2016-04-06T02:52:41.733-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|3, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.647-0500 c20011| 2016-04-06T02:52:41.733-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|2, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929152000|2, t: 3 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.649-0500 c20011| 2016-04-06T02:52:41.733-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.652-0500 c20011| 2016-04-06T02:52:41.736-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:14.653-0500 c20011| 2016-04-06T02:52:41.736-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:14.656-0500 c20011| 2016-04-06T02:52:41.736-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.657-0500 c20011| 2016-04-06T02:52:41.736-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 3 } and is durable through: { ts: Timestamp 1459929161000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.659-0500 c20011| 2016-04-06T02:52:41.736-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.664-0500 c20011| 2016-04-06T02:52:41.737-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.667-0500 c20011| 2016-04-06T02:52:41.737-0500 I COMMAND [conn38] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 15ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.671-0500 c20011| 2016-04-06T02:52:41.737-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:41.710-0500-5704c04965c17830b843f1b0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161710), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -76.0 }, max: { _id: MaxKey } }, left: { min: { _id: -76.0 }, max: { _id: -75.0 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -75.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } 
}, oplog: { acquireCount: { w: 1 } } } protocol:op_command 14ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.674-0500 c20011| 2016-04-06T02:52:41.737-0500 D COMMAND [conn38] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.675-0500 c20011| 2016-04-06T02:52:41.737-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.679-0500 c20011| 2016-04-06T02:52:41.737-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.680-0500 c20011| 2016-04-06T02:52:41.737-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:14.684-0500 c20011| 2016-04-06T02:52:41.737-0500 I COMMAND [conn38] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:443 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.688-0500 c20011| 2016-04-06T02:52:41.737-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c03a65c17830b843f1af') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.690-0500 c20011| 2016-04-06T02:52:41.737-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.693-0500 c20011| 2016-04-06T02:52:41.738-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c03a65c17830b843f1af') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.698-0500 c20011| 2016-04-06T02:52:41.738-0500 I COMMAND [conn36] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.699-0500 c20011| 2016-04-06T02:52:41.738-0500 D COMMAND [conn38] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.702-0500 c20011| 2016-04-06T02:52:41.738-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.705-0500 c20011| 2016-04-06T02:52:41.738-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.706-0500 c20011| 2016-04-06T02:52:41.738-0500 D QUERY [conn38] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:14.711-0500 c20011| 2016-04-06T02:52:41.738-0500 I COMMAND [conn38] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:434 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.714-0500 c20011| 2016-04-06T02:52:41.741-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.731-0500 c20011| 2016-04-06T02:52:41.742-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|4, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|3, t: 3 }, name-id: "203" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.737-0500 c20011| 2016-04-06T02:52:41.742-0500 D COMMAND [conn36] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } 
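Annotation: the config.settings reads logged above arrive with readConcern { level: "majority", afterOpTime: ... }: the primary first waits for its 'committed' snapshot to reach the requested optime, then serves the find from that snapshot (the "Waiting for 'committed' snapshot" / "Using 'committed' snapshot" pair). Below is a minimal shell sketch of an equivalent read; it assumes a connection to the config server, and it omits afterOpTime, which mongos supplies internally rather than through the public API:

    // Majority-read the balancer chunk size, as mongos does against the config replica set.
    // readConcern "majority" makes the server answer from its committed snapshot.
    var res = db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "chunksize" },
        readConcern: { level: "majority" },
        limit: 1,
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);
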
[js_test:multi_coll_drop] 2016-04-06T02:53:14.745-0500 c20011| 2016-04-06T02:52:41.742-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.757-0500 c20011| 2016-04-06T02:52:41.742-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.762-0500 c20011| 2016-04-06T02:52:41.743-0500 D QUERY [conn36] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:14.769-0500 c20011| 2016-04-06T02:52:41.743-0500 I COMMAND [conn36] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:434 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.774-0500 c20011| 2016-04-06T02:52:41.743-0500 D COMMAND [conn36] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.778-0500 c20011| 2016-04-06T02:52:41.743-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.782-0500 c20011| 2016-04-06T02:52:41.743-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.783-0500 c20011| 2016-04-06T02:52:41.743-0500 D QUERY [conn36] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:14.785-0500 c20011| 2016-04-06T02:52:41.743-0500 I COMMAND [conn36] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:428 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.790-0500 c20011| 2016-04-06T02:52:41.743-0500 D COMMAND [conn36] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929161743), up: 34, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.790-0500 c20011| 2016-04-06T02:52:41.743-0500 D QUERY [conn36] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.792-0500 c20011| 2016-04-06T02:52:41.743-0500 D REPL [conn36] Required snapshot optime: { ts: Timestamp 1459929161000|4, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|3, t: 3 }, name-id: "203" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.793-0500 c20011| 2016-04-06T02:52:41.743-0500 I WRITE [conn36] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929161743), up: 34, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.796-0500 c20011| 2016-04-06T02:52:41.744-0500 D REPL [conn36] Required snapshot optime: { ts: Timestamp 1459929161000|4, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|3, t: 3 }, name-id: "203" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.797-0500 c20011| 2016-04-06T02:52:41.744-0500 D REPL [conn36] Required snapshot optime: { ts: Timestamp 1459929161000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|3, t: 3 }, name-id: "203" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.815-0500 c20011| 2016-04-06T02:52:41.744-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, memberId: 
2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:14.839-0500 c20011| 2016-04-06T02:52:41.744-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:14.839-0500 c20011| 2016-04-06T02:52:41.744-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.840-0500 c20011| 2016-04-06T02:52:41.744-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|4, t: 3 } and is durable through: { ts: Timestamp 1459929161000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:14.841-0500 c20011| 2016-04-06T02:52:41.744-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|4, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|3, t: 3 }, name-id: "203" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.842-0500 c20011| 2016-04-06T02:52:41.744-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|3, t: 3 }, name-id: "203" } [js_test:multi_coll_drop] 2016-04-06T02:53:14.857-0500 c20011| 2016-04-06T02:52:41.744-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:14.865-0500 c20011| 2016-04-06T02:52:41.744-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.875-0500 c20011| 2016-04-06T02:52:41.745-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:14.896-0500 c20011| 2016-04-06T02:52:41.747-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:14.904-0500 c20011| 2016-04-06T02:52:41.747-0500 D COMMAND [conn35] command: replSetUpdatePosition 
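Annotation: the replSetUpdatePosition traffic in this stretch is what drags the majority commit point forward: each member reports its applied and durable optimes, and once a majority is durable through an optime the primary logs "Updating _lastCommittedOpTime", which in turn lets the majority reads waiting above unblock. A small sketch of how one might watch that commit point from the shell follows; the optimes field layout varies across server versions, so treat the field names as an assumption:

    // Compare the majority-committed optime with the locally applied one.
    var st = db.adminCommand({ replSetGetStatus: 1 });
    if (st.optimes) {  // present on newer servers; absent on some older builds
        printjson({ committed: st.optimes.lastCommittedOpTime,
                    applied: st.optimes.appliedOpTime });
    }
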
[js_test:multi_coll_drop] 2016-04-06T02:53:14.921-0500 c20011| 2016-04-06T02:52:41.747-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.926-0500 c20011| 2016-04-06T02:52:41.747-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|4, t: 3 } and is durable through: { ts: Timestamp 1459929161000|4, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.939-0500 c20011| 2016-04-06T02:52:41.747-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|4, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.942-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.944-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.944-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.945-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.946-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.947-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.947-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.948-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.949-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.949-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.951-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.953-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.954-0500 c20013| 2016-04-06T02:52:08.942-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.955-0500 c20013| 2016-04-06T02:52:08.942-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:14.960-0500 c20013| 2016-04-06T02:52:08.942-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.972-0500 c20013| 2016-04-06T02:52:08.942-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 764 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|62, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.974-0500 c20013| 2016-04-06T02:52:08.942-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 764 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:14.975-0500 c20013| 2016-04-06T02:52:08.943-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 764 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.978-0500 c20013| 2016-04-06T02:52:08.943-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 766 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.943-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|62, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.979-0500 c20013| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 766 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:14.985-0500 c20013| 2016-04-06T02:52:08.944-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.988-0500 c20013| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 767 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.990-0500 c20013| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 767 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:14.992-0500 c20013| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 767 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.993-0500 c20013| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 766 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.994-0500 c20013| 2016-04-06T02:52:08.944-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|63, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.994-0500 c20013| 2016-04-06T02:52:08.944-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:14.996-0500 c20013| 2016-04-06T02:52:08.944-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 770 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.944-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:14.996-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:14.999-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.010-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.033-0500 c20013| 2016-04-06T02:52:08.944-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 770 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.041-0500 c20013| 2016-04-06T02:52:08.945-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 770 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|64, t: 1, h: -930003874952597810, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.047-0500 c20013| 2016-04-06T02:52:08.945-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|64 and ending at ts: Timestamp 1459929128000|64
[js_test:multi_coll_drop] 2016-04-06T02:53:15.048-0500 c20013| 2016-04-06T02:52:08.945-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.049-0500 c20013| 2016-04-06T02:52:08.945-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.056-0500 c20013| 2016-04-06T02:52:08.945-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.058-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.059-0500 c20012| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.060-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.060-0500 c20012| 2016-04-06T02:52:10.235-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:15.061-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.062-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.063-0500 c20012| 2016-04-06T02:52:10.235-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-83.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.063-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.066-0500 c20012| 2016-04-06T02:52:10.235-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-82.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.068-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.072-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.073-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.076-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.076-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.079-0500 c20011| 2016-04-06T02:52:41.747-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|4, t: 3 }, name-id: "204" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.081-0500 c20013| 2016-04-06T02:52:08.945-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.082-0500 c20013| 2016-04-06T02:52:08.945-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.084-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.088-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.088-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.089-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.089-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.091-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.092-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.093-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.096-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.097-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.097-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.099-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.099-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.100-0500 c20013| 2016-04-06T02:52:08.947-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:15.102-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.103-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.106-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.107-0500 c20013| 2016-04-06T02:52:08.947-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 772 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.947-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|63, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.111-0500 c20013| 2016-04-06T02:52:08.947-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 772 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.113-0500 c20013| 2016-04-06T02:52:08.947-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.116-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.117-0500 c20013| 2016-04-06T02:52:08.947-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.120-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.121-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.122-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.123-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.126-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.130-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.130-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.131-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.132-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.133-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.135-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.136-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.140-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.141-0500 c20013| 2016-04-06T02:52:08.948-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.143-0500 c20013| 2016-04-06T02:52:08.948-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.147-0500 c20013| 2016-04-06T02:52:08.948-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.150-0500 c20013| 2016-04-06T02:52:08.948-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 773 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|63, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.152-0500 c20013| 2016-04-06T02:52:08.948-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 773 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.155-0500 c20013| 2016-04-06T02:52:08.948-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 773 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.160-0500 c20013| 2016-04-06T02:52:08.950-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.165-0500 c20013| 2016-04-06T02:52:08.950-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 775 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.166-0500 c20013| 2016-04-06T02:52:08.950-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 775 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.167-0500 c20013| 2016-04-06T02:52:08.950-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 775 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.168-0500 c20013| 2016-04-06T02:52:08.951-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 772 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.169-0500 c20013| 2016-04-06T02:52:08.951-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|64, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.170-0500 c20013| 2016-04-06T02:52:08.951-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:15.172-0500 c20013| 2016-04-06T02:52:08.951-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 778 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.951-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.172-0500 c20013| 2016-04-06T02:52:08.951-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 778 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.175-0500 c20013| 2016-04-06T02:52:08.954-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 778 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|65, t: 1, h: 2692489107514904355, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f196'), state: 2, when: new Date(1459929128953), why: "splitting chunk [{ _id: -88.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.177-0500 c20013| 2016-04-06T02:52:08.954-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|65 and ending at ts: Timestamp 1459929128000|65
[js_test:multi_coll_drop] 2016-04-06T02:53:15.178-0500 c20013| 2016-04-06T02:52:08.954-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.181-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.182-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.182-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.182-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.183-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.186-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.188-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.189-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.190-0500 c20013| 2016-04-06T02:52:08.954-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.191-0500 c20011| 2016-04-06T02:52:41.747-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|4, t: 3 }, name-id: "204" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.195-0500 c20011| 2016-04-06T02:52:41.747-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:15.200-0500 c20011| 2016-04-06T02:52:41.747-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c03a65c17830b843f1af') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 9ms
[js_test:multi_coll_drop] 2016-04-06T02:53:15.204-0500 c20011| 2016-04-06T02:52:41.747-0500 D COMMAND [conn38] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929161747), up: 34, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.207-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.210-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.211-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.213-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.215-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.216-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.217-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.218-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.218-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.219-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.221-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.222-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.222-0500 c20013| 2016-04-06T02:52:08.955-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:15.224-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.225-0500 c20013| 2016-04-06T02:52:08.955-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.226-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.229-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.230-0500 c20013| 2016-04-06T02:52:08.955-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.234-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.234-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.236-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.239-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.241-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.243-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.244-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.245-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.247-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.247-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.249-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.253-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.255-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.257-0500 c20013| 2016-04-06T02:52:08.956-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.263-0500 c20013| 2016-04-06T02:52:08.956-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.270-0500 c20013| 2016-04-06T02:52:08.956-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.273-0500 c20013| 2016-04-06T02:52:08.956-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 780 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|64, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.276-0500 c20013| 2016-04-06T02:52:08.956-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 780 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.282-0500 c20013| 2016-04-06T02:52:08.956-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 781 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.956-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|64, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.284-0500 c20013| 2016-04-06T02:52:08.956-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 780 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.287-0500 c20013| 2016-04-06T02:52:08.956-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 781 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.291-0500 c20013| 2016-04-06T02:52:08.960-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.300-0500 c20013| 2016-04-06T02:52:08.960-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 783 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.304-0500 c20013| 2016-04-06T02:52:08.960-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 783 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.308-0500 c20013| 2016-04-06T02:52:08.960-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 783 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.311-0500 c20013| 2016-04-06T02:52:08.961-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 781 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.313-0500 c20013| 2016-04-06T02:52:08.962-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|65, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.315-0500 c20013| 2016-04-06T02:52:08.962-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:15.319-0500 c20013| 2016-04-06T02:52:08.963-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 786 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.963-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.320-0500 c20013| 2016-04-06T02:52:08.963-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 786 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.337-0500 c20013| 2016-04-06T02:52:08.964-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 786 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|66, t: 1, h: -6638103080377994745, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: -87.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-88.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-87.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.342-0500 c20013| 2016-04-06T02:52:08.964-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|66 and ending at ts: Timestamp 1459929128000|66
[js_test:multi_coll_drop] 2016-04-06T02:53:15.344-0500 c20013| 2016-04-06T02:52:08.964-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.345-0500 c20013| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.347-0500 c20013| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.348-0500 c20013| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.349-0500 c20013| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.351-0500 c20013| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.352-0500 c20013| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.356-0500 c20013| 2016-04-06T02:52:08.964-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.356-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.357-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.358-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.359-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.359-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.360-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.362-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.365-0500 c20013| 2016-04-06T02:52:08.965-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:15.366-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.366-0500 c20013| 2016-04-06T02:52:08.965-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-88.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.370-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.371-0500 c20012| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.372-0500 c20011| 2016-04-06T02:52:41.747-0500 D QUERY [conn38] Using idhack: { _id: "mongovm16:20015" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.374-0500 c20011| 2016-04-06T02:52:41.747-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929161000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|4, t: 3 }, name-id: "204" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.378-0500 c20011| 2016-04-06T02:52:41.747-0500 I WRITE [conn38] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929161747), up: 34, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:15.380-0500 c20012| 2016-04-06T02:52:10.235-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.386-0500 c20012| 2016-04-06T02:52:10.235-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.397-0500 c20012| 2016-04-06T02:52:10.235-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 964 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.401-0500 c20012| 2016-04-06T02:52:10.235-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 964 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.402-0500 c20012| 2016-04-06T02:52:10.236-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 964 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.406-0500 c20012| 2016-04-06T02:52:10.236-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 966 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.236-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.408-0500 c20012| 2016-04-06T02:52:10.236-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 966 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.411-0500 c20012| 2016-04-06T02:52:10.238-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.417-0500 c20012| 2016-04-06T02:52:10.238-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 967 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.418-0500 c20012| 2016-04-06T02:52:10.238-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 967 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.420-0500 c20012| 2016-04-06T02:52:10.238-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 967 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.423-0500 c20012| 2016-04-06T02:52:10.239-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 966 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.424-0500 c20012| 2016-04-06T02:52:10.239-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|4, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.429-0500 c20012| 2016-04-06T02:52:10.239-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:15.433-0500 c20012| 2016-04-06T02:52:10.239-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 970 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.239-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.434-0500 c20012| 2016-04-06T02:52:10.239-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 970 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.440-0500 c20012| 2016-04-06T02:52:10.240-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 970 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|5, t: 1, h: 3823828548878560264, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:10.239-0500-5704c02a65c17830b843f1a1", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130239), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -83.0 }, max: { _id: MaxKey } }, left: { min: { _id: -83.0 }, max: { _id: -82.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -82.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.443-0500 c20012| 2016-04-06T02:52:10.240-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|5 and ending at ts: Timestamp 1459929130000|5
[js_test:multi_coll_drop] 2016-04-06T02:53:15.444-0500 c20012| 2016-04-06T02:52:10.240-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.445-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.448-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.449-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.450-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.452-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.453-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.456-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.458-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.460-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.462-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.464-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.466-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.470-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.472-0500 c20012| 2016-04-06T02:52:10.240-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:15.473-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.477-0500 c20012| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.478-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.480-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.485-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.492-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.493-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.495-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.496-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.498-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.499-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.501-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.505-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.510-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.511-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.516-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.519-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.521-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.526-0500 c20012| 2016-04-06T02:52:10.241-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.528-0500 c20012| 2016-04-06T02:52:10.241-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.541-0500 c20012| 2016-04-06T02:52:10.242-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.555-0500 c20012| 2016-04-06T02:52:10.242-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 972 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.556-0500 c20012| 2016-04-06T02:52:10.242-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 972 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.562-0500 c20012| 2016-04-06T02:52:10.242-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 972 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.568-0500 c20012| 2016-04-06T02:52:10.242-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 974 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.242-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.569-0500 c20012| 2016-04-06T02:52:10.242-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 974 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.582-0500 c20012| 2016-04-06T02:52:10.244-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.589-0500 c20012| 2016-04-06T02:52:10.244-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 974 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.593-0500 c20012| 2016-04-06T02:52:10.244-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 975 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.593-0500 c20012| 2016-04-06T02:52:10.244-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 975 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.594-0500 c20012| 2016-04-06T02:52:10.244-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 975 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.596-0500 c20012| 2016-04-06T02:52:10.244-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|5, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.599-0500 c20012| 2016-04-06T02:52:10.244-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:15.603-0500 c20012| 2016-04-06T02:52:10.244-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 978 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.244-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.607-0500 c20012| 2016-04-06T02:52:10.244-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 978 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:15.611-0500 c20012| 2016-04-06T02:52:10.245-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 978 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|6, t: 1, h: 838024042340526810, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.625-0500 c20012| 2016-04-06T02:52:10.245-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|6 and ending at ts: Timestamp 1459929130000|6
[js_test:multi_coll_drop] 2016-04-06T02:53:15.628-0500 c20011| 2016-04-06T02:52:41.748-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:60975 #43 (16 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:15.629-0500 c20011| 2016-04-06T02:52:41.748-0500 D COMMAND [conn43] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.632-0500 c20011| 2016-04-06T02:52:41.748-0500 I COMMAND [conn43] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:15.636-0500 c20011| 2016-04-06T02:52:41.748-0500 D COMMAND [conn43] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.638-0500 c20011| 2016-04-06T02:52:41.748-0500 D COMMAND [conn43] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|4, t: 3 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.642-0500 c20011| 2016-04-06T02:52:41.748-0500 D COMMAND [conn43] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.649-0500 c20011| 2016-04-06T02:52:41.748-0500 D QUERY [conn43] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:15.651-0500 c20012| 2016-04-06T02:52:10.245-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:15.656-0500 c20011| 2016-04-06T02:52:41.748-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.661-0500 c20011| 2016-04-06T02:52:41.749-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929161000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|4, t: 3 }, name-id: "204" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.664-0500 c20011| 2016-04-06T02:52:41.749-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929161000|6, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|4, t: 3 }, name-id: "204" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.667-0500 c20011| 2016-04-06T02:52:41.749-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.667-0500 c20011| 2016-04-06T02:52:41.749-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:15.668-0500 c20011| 2016-04-06T02:52:41.749-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.670-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.671-0500 c20013| 2016-04-06T02:52:08.965-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-87.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:15.673-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:15.675-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop]
2016-04-06T02:53:15.679-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.681-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.686-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.687-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.689-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.689-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.689-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.694-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.694-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.706-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.711-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.713-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.714-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.722-0500 c20013| 2016-04-06T02:52:08.965-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.725-0500 c20013| 2016-04-06T02:52:08.966-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:15.732-0500 c20013| 2016-04-06T02:52:08.966-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.744-0500 c20013| 2016-04-06T02:52:08.966-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 788 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|65, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.746-0500 c20013| 2016-04-06T02:52:08.966-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 788 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.747-0500 c20013| 2016-04-06T02:52:08.966-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 788 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.748-0500 c20013| 2016-04-06T02:52:08.967-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 790 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.967-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|65, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:15.749-0500 c20013| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 790 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.751-0500 c20013| 2016-04-06T02:52:08.967-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.755-0500 c20013| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 791 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.756-0500 c20013| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 791 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.757-0500 c20013| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 791 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.758-0500 c20013| 2016-04-06T02:52:08.967-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 790 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.759-0500 c20013| 2016-04-06T02:52:08.967-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|66, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.760-0500 c20013| 2016-04-06T02:52:08.967-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:15.763-0500 c20013| 2016-04-06T02:52:08.968-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 794 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.968-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:15.764-0500 c20013| 2016-04-06T02:52:08.968-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 794 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.768-0500 c20013| 2016-04-06T02:52:08.968-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 794 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|67, t: 1, h: -1218800546483451830, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:08.967-0500-5704c02865c17830b843f197", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929128967), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -88.0 }, max: { _id: MaxKey } }, left: { min: { _id: -88.0 }, max: { _id: -87.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -87.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.769-0500 c20013| 2016-04-06T02:52:08.968-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|67 and ending at ts: Timestamp 1459929128000|67 [js_test:multi_coll_drop] 2016-04-06T02:53:15.770-0500 c20013| 2016-04-06T02:52:08.968-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:15.770-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.771-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.772-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.773-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.773-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.774-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.775-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.776-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.777-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.778-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.779-0500 c20013| 2016-04-06T02:52:08.968-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.780-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.780-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.780-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.781-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.782-0500 c20013| 2016-04-06T02:52:08.969-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:15.783-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.783-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.784-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.785-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:15.786-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.786-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.787-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.788-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.789-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.789-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.793-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.796-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.800-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.800-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.803-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.804-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.815-0500 c20013| 2016-04-06T02:52:08.969-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.818-0500 c20013| 2016-04-06T02:52:08.969-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:15.823-0500 c20013| 2016-04-06T02:52:08.969-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.831-0500 c20013| 2016-04-06T02:52:08.969-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 796 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|66, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.833-0500 c20013| 2016-04-06T02:52:08.970-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 796 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.873-0500 c20013| 2016-04-06T02:52:08.970-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 796 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.875-0500 c20013| 2016-04-06T02:52:08.970-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 798 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.970-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|66, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:15.875-0500 c20013| 2016-04-06T02:52:08.970-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 798 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.879-0500 c20013| 2016-04-06T02:52:08.971-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.882-0500 c20013| 2016-04-06T02:52:08.971-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 799 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.883-0500 c20013| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 799 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.886-0500 c20013| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 799 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.888-0500 c20013| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 798 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.889-0500 c20013| 2016-04-06T02:52:08.972-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|67, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.889-0500 c20013| 2016-04-06T02:52:08.972-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:15.891-0500 c20013| 2016-04-06T02:52:08.972-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 802 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.972-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:15.892-0500 c20013| 2016-04-06T02:52:08.972-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 802 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.901-0500 c20013| 2016-04-06T02:52:08.973-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 802 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|68, t: 1, h: -2285432667988156004, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.903-0500 c20013| 2016-04-06T02:52:08.973-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|68 and ending at ts: Timestamp 1459929128000|68 [js_test:multi_coll_drop] 2016-04-06T02:53:15.903-0500 c20013| 2016-04-06T02:52:08.973-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:15.904-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.905-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.906-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.907-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.907-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.908-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.908-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.909-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.910-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.910-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.911-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.911-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.912-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.913-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.916-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.916-0500 c20013| 2016-04-06T02:52:08.973-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:15.918-0500 c20013| 2016-04-06T02:52:08.973-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.919-0500 c20013| 2016-04-06T02:52:08.973-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:15.919-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.920-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:15.921-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.921-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.922-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.923-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.924-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.928-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.929-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.931-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.933-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.944-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.946-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.947-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.948-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.950-0500 c20013| 2016-04-06T02:52:08.974-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:15.950-0500 c20013| 2016-04-06T02:52:08.975-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:15.966-0500 c20013| 2016-04-06T02:52:08.975-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.968-0500 c20013| 2016-04-06T02:52:08.975-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 804 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|67, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.970-0500 c20013| 2016-04-06T02:52:08.975-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 804 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.971-0500 c20013| 2016-04-06T02:52:08.975-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 805 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.975-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|67, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:15.972-0500 c20013| 2016-04-06T02:52:08.975-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 805 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:15.975-0500 c20013| 2016-04-06T02:52:08.975-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 804 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.976-0500 c20013| 2016-04-06T02:52:08.976-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 805 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.978-0500 c20013| 2016-04-06T02:52:08.976-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|68, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:15.979-0500 c20013| 2016-04-06T02:52:08.976-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:15.985-0500 c20013| 2016-04-06T02:52:08.976-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 808 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.976-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:15.988-0500 c20013| 2016-04-06T02:52:08.976-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 808 on host mongovm16:20011 
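
The records above trace two replication loops running side by side on each secondary: the rsBackgroundSync fetcher tailing the sync source's oplog (repeated getMore calls on local.oplog.rs carrying term and lastKnownCommittedOpTime), and the SyncSourceFeedback reporter pushing replSetUpdatePosition upstream with every member's durableOpTime/appliedOpTime pair. The sketch below is not part of the test; it is a minimal mongo-shell illustration of inspecting the same optimes by hand. The host:port and the Timestamp bound are copied from the log; the variable names and the $gte query are assumptions made for illustration only.

    // Annotation (not part of the test log): inspect the optimes that the
    // replSetUpdatePosition reports above carry. Host/port and the timestamp
    // come from the log; everything else is illustrative.
    var conn = new Mongo("mongovm16:20011");                 // assumed reachable sync source
    var oplog = conn.getDB("local").getCollection("oplog.rs");
    // Last applied entry -- corresponds to appliedOpTime { ts, t } in the reports.
    var last = oplog.find().sort({ $natural: -1 }).limit(1).next();
    printjson({ appliedTs: last.ts, term: last.t });
    // The fetcher above tails the same capped collection; roughly equivalent to:
    var tail = oplog.find({ ts: { $gte: Timestamp(1459929128, 68) } })
                    .addOption(DBQuery.Option.tailable | DBQuery.Option.awaitData);
    if (tail.hasNext()) printjson(tail.next());              // next op after the known optime
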
[js_test:multi_coll_drop] 2016-04-06T02:53:15.993-0500 c20013| 2016-04-06T02:52:08.978-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:15.999-0500 c20013| 2016-04-06T02:52:08.978-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 809 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.010-0500 c20013| 2016-04-06T02:52:08.978-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 809 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.012-0500 c20013| 2016-04-06T02:52:08.979-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 809 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.032-0500 c20013| 2016-04-06T02:52:08.982-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 808 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|69, t: 1, h: -6723415074916916584, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02865c17830b843f198'), state: 2, when: new Date(1459929128978), why: "splitting chunk [{ _id: -87.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.034-0500 c20013| 2016-04-06T02:52:08.983-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|69 and ending at ts: Timestamp 1459929128000|69 [js_test:multi_coll_drop] 2016-04-06T02:53:16.058-0500 c20013| 2016-04-06T02:52:08.983-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:16.069-0500 c20013| 2016-04-06T02:52:08.983-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.070-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.082-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.088-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.096-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.099-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.106-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.113-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.114-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.114-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.117-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.118-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.139-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.148-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.172-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.172-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.172-0500 c20013| 2016-04-06T02:52:08.984-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:16.173-0500 c20013| 2016-04-06T02:52:08.984-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:16.201-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.211-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.211-0500 
c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.212-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.214-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.215-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.215-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.215-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.216-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.216-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.216-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.217-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.217-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.217-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.228-0500 c20012| 2016-04-06T02:52:10.246-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:16.228-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.229-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.229-0500 c20012| 2016-04-06T02:52:10.246-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:16.229-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.230-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.230-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.230-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.231-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 9] shutting down thread in 
pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.231-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.231-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.244-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.276-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.291-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.319-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.346-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.347-0500 c20012| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.348-0500 c20012| 2016-04-06T02:52:10.247-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.348-0500 c20012| 2016-04-06T02:52:10.247-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.349-0500 c20012| 2016-04-06T02:52:10.247-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.371-0500 c20012| 2016-04-06T02:52:10.247-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:16.395-0500 c20012| 2016-04-06T02:52:10.247-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.398-0500 c20012| 2016-04-06T02:52:10.247-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 980 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.401-0500 c20012| 2016-04-06T02:52:10.247-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 980 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.412-0500 c20012| 2016-04-06T02:52:10.247-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 980 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.586-0500 c20012| 2016-04-06T02:52:10.248-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 982 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.248-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:16.588-0500 c20012| 2016-04-06T02:52:10.248-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 982 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.593-0500 c20012| 2016-04-06T02:52:10.253-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.599-0500 c20012| 2016-04-06T02:52:10.253-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 983 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|6, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.600-0500 c20012| 2016-04-06T02:52:10.253-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 983 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.600-0500 c20012| 2016-04-06T02:52:10.254-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 983 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.603-0500 c20012| 2016-04-06T02:52:10.254-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 982 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.605-0500 c20012| 2016-04-06T02:52:10.254-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.607-0500 c20012| 2016-04-06T02:52:10.254-0500 D REPL [conn7] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929130000|6, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929130000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.608-0500 c20012| 2016-04-06T02:52:10.254-0500 D REPL [conn7] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999981μs [js_test:multi_coll_drop] 2016-04-06T02:53:16.608-0500 c20012| 2016-04-06T02:52:10.254-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.609-0500 c20012| 2016-04-06T02:52:10.254-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:16.610-0500 c20012| 2016-04-06T02:52:10.254-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:16.615-0500 c20012| 2016-04-06T02:52:10.254-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.615-0500 c20012| 2016-04-06T02:52:10.255-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:16.616-0500 c20012| 2016-04-06T02:52:10.255-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 986 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.255-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:16.617-0500 c20012| 2016-04-06T02:52:10.255-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:16.617-0500 c20012| 2016-04-06T02:52:10.255-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 986 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.618-0500 c20012| 2016-04-06T02:52:10.257-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 986 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|7, t: 1, h: -6994787252017545484, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02a65c17830b843f1a2'), state: 2, when: new Date(1459929130256), why: "splitting chunk [{ _id: -82.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.618-0500 c20012| 2016-04-06T02:52:10.257-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|7 and ending at ts: Timestamp 1459929130000|7 [js_test:multi_coll_drop] 2016-04-06T02:53:16.620-0500 c20012| 2016-04-06T02:52:10.257-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:16.621-0500 c20012| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.621-0500 c20012| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.622-0500 c20012| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.622-0500 c20012| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.622-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.623-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.623-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.623-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.623-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.624-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.625-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.625-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.626-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.627-0500 c20012| 2016-04-06T02:52:10.258-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:16.627-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.627-0500 c20012| 2016-04-06T02:52:10.258-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:16.627-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.628-0500 c20012| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.628-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.628-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:16.628-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.629-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.629-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.630-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.630-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.630-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.631-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.633-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.635-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.636-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.649-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.653-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.656-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.656-0500 c20012| 2016-04-06T02:52:10.259-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.657-0500 c20012| 2016-04-06T02:52:10.259-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:16.659-0500 c20012| 2016-04-06T02:52:10.260-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.676-0500 c20012| 2016-04-06T02:52:10.260-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 988 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.679-0500 c20012| 2016-04-06T02:52:10.260-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 988 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.681-0500 c20012| 2016-04-06T02:52:10.260-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 988 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.692-0500 c20012| 2016-04-06T02:52:10.260-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 990 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.260-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:16.703-0500 c20012| 2016-04-06T02:52:10.260-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 990 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.706-0500 c20012| 2016-04-06T02:52:10.265-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.713-0500 c20012| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 991 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|7, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.714-0500 c20012| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 991 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.715-0500 c20012| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 991 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.716-0500 c20012| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 990 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.721-0500 c20012| 2016-04-06T02:52:10.270-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.722-0500 c20012| 2016-04-06T02:52:10.271-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:16.724-0500 c20012| 2016-04-06T02:52:10.271-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 994 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.271-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:16.725-0500 c20012| 2016-04-06T02:52:10.271-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 994 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.729-0500 c20012| 2016-04-06T02:52:10.271-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 994 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|8, t: 1, h: -1899469897052357851, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: -81.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-82.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-81.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.732-0500 c20012| 2016-04-06T02:52:10.275-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|8 and ending at ts: Timestamp 1459929130000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:16.732-0500 c20012| 2016-04-06T02:52:10.275-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:53:16.735-0500 c20012| 2016-04-06T02:52:10.275-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:16.744-0500 c20012| 2016-04-06T02:52:10.275-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.746-0500 c20012| 2016-04-06T02:52:10.275-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.747-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.749-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.750-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.752-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.755-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.756-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.759-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.764-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.765-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.765-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.766-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.767-0500 c20012| 2016-04-06T02:52:10.276-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:16.770-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.770-0500 c20012| 2016-04-06T02:52:10.276-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-82.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:16.770-0500 c20012| 2016-04-06T02:52:10.276-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-81.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:16.772-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.772-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.773-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:16.778-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.778-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.795-0500 c20012| 2016-04-06T02:52:10.276-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.809-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.811-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.815-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.818-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.818-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.820-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.822-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.823-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.824-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.826-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.827-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.828-0500 c20012| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.829-0500 c20012| 2016-04-06T02:52:10.277-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:16.835-0500 c20012| 2016-04-06T02:52:10.277-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.836-0500 c20012| 2016-04-06T02:52:10.277-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 996 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.838-0500 c20012| 2016-04-06T02:52:10.277-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 996 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.841-0500 c20012| 2016-04-06T02:52:10.277-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 996 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.847-0500 c20012| 2016-04-06T02:52:10.280-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.853-0500 c20012| 2016-04-06T02:52:10.280-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 998 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:16.854-0500 c20012| 2016-04-06T02:52:10.280-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 998 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.855-0500 c20012| 2016-04-06T02:52:10.280-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 998 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.862-0500 c20012| 2016-04-06T02:52:10.281-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1000 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.281-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:16.864-0500 c20012| 2016-04-06T02:52:10.281-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1000 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.870-0500 c20012| 2016-04-06T02:52:10.281-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1000 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|9, t: 1, h: -5293347687548571671, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:10.276-0500-5704c02a65c17830b843f1a3", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130276), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -82.0 }, max: { _id: MaxKey } }, left: { min: { _id: -82.0 }, max: { _id: -81.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -81.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.871-0500 c20012| 2016-04-06T02:52:10.285-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.874-0500 c20012| 2016-04-06T02:52:10.285-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|9 and ending at ts: Timestamp 1459929130000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:16.874-0500 c20012| 2016-04-06T02:52:10.285-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.879-0500 c20012| 2016-04-06T02:52:10.285-0500 D REPL [conn7] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929130000|10, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929130000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:16.879-0500 c20012| 2016-04-06T02:52:10.285-0500 D REPL [conn7] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999980μs [js_test:multi_coll_drop] 2016-04-06T02:53:16.898-0500 c20012| 2016-04-06T02:52:10.285-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:16.906-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.911-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.915-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.916-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.918-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.920-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.922-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.925-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.927-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.927-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.928-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.929-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.931-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.934-0500 c20012| 2016-04-06T02:52:10.286-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:16.934-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.934-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.935-0500 c20012| 2016-04-06T02:52:10.286-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.936-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.938-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.939-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:16.940-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.941-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.942-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.942-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.946-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.948-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.949-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.949-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.952-0500 c20012| 2016-04-06T02:52:10.287-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1002 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.287-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:16.953-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.954-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.955-0500 c20012| 2016-04-06T02:52:10.287-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1002 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:16.957-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.960-0500 c20012| 2016-04-06T02:52:10.288-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.972-0500 c20012| 2016-04-06T02:52:10.287-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:16.978-0500 c20012| 2016-04-06T02:52:10.288-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:16.998-0500 c20012| 2016-04-06T02:52:10.288-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.030-0500 c20012| 2016-04-06T02:52:10.288-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1003 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.044-0500 c20012| 2016-04-06T02:52:10.288-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1002 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|10, t: 1, h: 3135197531614568333, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.045-0500 c20012| 2016-04-06T02:52:10.288-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1003 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.045-0500 c20012| 2016-04-06T02:52:10.288-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1003 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.047-0500 c20012| 2016-04-06T02:52:10.288-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.047-0500 c20012| 2016-04-06T02:52:10.288-0500 D REPL [conn7] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29997068μs [js_test:multi_coll_drop] 2016-04-06T02:53:17.048-0500 c20012| 2016-04-06T02:52:10.288-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|10 and ending at ts: Timestamp 1459929130000|10 [js_test:multi_coll_drop] 2016-04-06T02:53:17.049-0500 c20012| 2016-04-06T02:52:10.288-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:17.049-0500 c20012| 2016-04-06T02:52:10.288-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.049-0500 c20012| 2016-04-06T02:52:10.288-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.059-0500 c20012| 2016-04-06T02:52:10.288-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.062-0500 c20012| 2016-04-06T02:52:10.288-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.064-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.064-0500 c20012| 2016-04-06T02:52:10.288-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.068-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.069-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.071-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.072-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.097-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.100-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.103-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.113-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.113-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.113-0500 c20012| 2016-04-06T02:52:10.289-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:17.118-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.120-0500 c20012| 2016-04-06T02:52:10.289-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.121-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.122-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:17.123-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.125-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.129-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.129-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.130-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.131-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.131-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.132-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.135-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.136-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.139-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.139-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.139-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.139-0500 c20012| 2016-04-06T02:52:10.289-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:17.142-0500 c20012| 2016-04-06T02:52:10.290-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:17.148-0500 c20012| 2016-04-06T02:52:10.290-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.149-0500 c20012| 2016-04-06T02:52:10.290-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1006 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.150-0500 c20012| 2016-04-06T02:52:10.290-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1006 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.150-0500 c20012| 2016-04-06T02:52:10.290-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.157-0500 c20012| 2016-04-06T02:52:10.290-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.158-0500 c20012| 2016-04-06T02:52:10.290-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1006 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.162-0500 c20012| 2016-04-06T02:52:10.290-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:17.169-0500 c20012| 2016-04-06T02:52:10.290-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.175-0500 c20012| 2016-04-06T02:52:10.291-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|38 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.177-0500 c20012| 2016-04-06T02:52:10.291-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.179-0500 c20012| 2016-04-06T02:52:10.291-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|38 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.180-0500 c20012| 2016-04-06T02:52:10.291-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1008 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.291-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.181-0500 c20012| 2016-04-06T02:52:10.291-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1008 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.183-0500 c20012| 2016-04-06T02:52:10.291-0500 D QUERY [conn7] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:17.187-0500 c20012| 2016-04-06T02:52:10.291-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|38 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|10, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.191-0500 c20012| 2016-04-06T02:52:10.291-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.196-0500 c20012| 2016-04-06T02:52:10.291-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1009 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.197-0500 c20012| 2016-04-06T02:52:10.291-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1009 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.199-0500 c20012| 2016-04-06T02:52:10.291-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1009 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.203-0500 c20012| 2016-04-06T02:52:10.292-0500 
D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.209-0500 c20012| 2016-04-06T02:52:10.292-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1011 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.211-0500 c20012| 2016-04-06T02:52:10.292-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1011 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.215-0500 c20012| 2016-04-06T02:52:12.081-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1012 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:22.081-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.216-0500 c20012| 2016-04-06T02:52:12.081-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1012 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.218-0500 c20012| 2016-04-06T02:52:12.082-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1012 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.219-0500 c20012| 2016-04-06T02:52:12.082-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:14.082Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.220-0500 c20012| 2016-04-06T02:52:12.085-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.221-0500 c20012| 2016-04-06T02:52:12.085-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.226-0500 c20012| 2016-04-06T02:52:12.085-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.228-0500 c20012| 2016-04-06T02:52:12.183-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1014 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:22.183-0500 cmd:{ replSetHeartbeat: 
"multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.229-0500 c20012| 2016-04-06T02:52:12.183-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1014 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.231-0500 c20012| 2016-04-06T02:52:13.687-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:37082 #13 (11 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:17.232-0500 c20012| 2016-04-06T02:52:13.688-0500 D COMMAND [conn13] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.232-0500 c20012| 2016-04-06T02:52:13.688-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.234-0500 c20012| 2016-04-06T02:52:13.688-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.240-0500 c20012| 2016-04-06T02:52:13.688-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.241-0500 c20012| 2016-04-06T02:52:14.044-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.241-0500 c20012| 2016-04-06T02:52:14.044-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.244-0500 c20012| 2016-04-06T02:52:14.045-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1011 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.246-0500 c20012| 2016-04-06T02:52:14.046-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1008 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.247-0500 c20012| 2016-04-06T02:52:14.048-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1014 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:22.183-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:17.249-0500 c20012| 2016-04-06T02:52:14.048-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1014 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:17.250-0500 c20012| 2016-04-06T02:52:14.050-0500 I REPL [ReplicationExecutor] Error in heartbeat request to mongovm16:20011; HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:17.251-0500 c20012| 2016-04-06T02:52:14.050-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:14.050Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.253-0500 c20012| 2016-04-06T02:52:14.050-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1018 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:22.183-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.255-0500 c20012| 2016-04-06T02:52:14.050-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 
2016-04-06T02:53:17.255-0500 c20012| 2016-04-06T02:52:14.050-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:17.260-0500 c20012| 2016-04-06T02:52:14.050-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1019 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.264-0500 c20012| 2016-04-06T02:52:14.050-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.265-0500 c20012| 2016-04-06T02:52:14.051-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.269-0500 c20012| 2016-04-06T02:52:14.051-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1019 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:17.270-0500 c20012| 2016-04-06T02:52:14.051-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1018 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.272-0500 c20012| 2016-04-06T02:52:14.051-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1018 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.273-0500 c20012| 2016-04-06T02:52:14.053-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:17.277-0500 c20012| 2016-04-06T02:52:14.053-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:16.053Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.283-0500 c20012| 2016-04-06T02:52:14.053-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1021 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:19.053-0500 cmd:{ getMore: 20785203637, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.285-0500 c20012| 2016-04-06T02:52:14.053-0500 I ASIO [rsBackgroundSync-0] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.290-0500 c20012| 2016-04-06T02:52:14.053-0500 I ASIO [rsBackgroundSync-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:17.292-0500 c20012| 2016-04-06T02:52:14.053-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.295-0500 c20012| 2016-04-06T02:52:14.054-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1022 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.296-0500 c20012| 2016-04-06T02:52:14.054-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.298-0500 c20012| 2016-04-06T02:52:14.054-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1022 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:17.300-0500 c20012| 2016-04-06T02:52:14.054-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1021 on host mongovm16:20011 [js_test:multi_coll_drop] 
2016-04-06T02:53:17.303-0500 c20012| 2016-04-06T02:52:14.082-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1023 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:24.082-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.310-0500 c20012| 2016-04-06T02:52:14.082-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1023 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.315-0500 c20012| 2016-04-06T02:52:14.083-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1023 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.315-0500 c20012| 2016-04-06T02:52:14.083-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:16.083Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.324-0500 c20012| 2016-04-06T02:52:14.085-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.327-0500 c20012| 2016-04-06T02:52:14.085-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.330-0500 c20012| 2016-04-06T02:52:14.085-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.330-0500 c20012| 2016-04-06T02:52:14.553-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.335-0500 c20012| 2016-04-06T02:52:14.553-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.338-0500 c20012| 2016-04-06T02:52:15.054-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.338-0500 c20012| 2016-04-06T02:52:15.054-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.339-0500 c20012| 2016-04-06T02:52:15.555-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.340-0500 c20012| 2016-04-06T02:52:15.556-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.344-0500 c20012| 2016-04-06T02:52:15.851-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.347-0500 c20012| 2016-04-06T02:52:15.851-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.347-0500 c20012| 2016-04-06T02:52:16.052-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.355-0500 c20012| 2016-04-06T02:52:16.053-0500 D ASIO [ReplicationExecutor] startCommand: 
RemoteCommand 1025 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:26.053-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.356-0500 c20012| 2016-04-06T02:52:16.053-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1025 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.360-0500 c20012| 2016-04-06T02:52:16.053-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.365-0500 c20012| 2016-04-06T02:52:16.055-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1025 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.367-0500 c20012| 2016-04-06T02:52:16.055-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:18.055Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.368-0500 c20012| 2016-04-06T02:52:16.057-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.370-0500 c20012| 2016-04-06T02:52:16.057-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.375-0500 c20012| 2016-04-06T02:52:16.083-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1027 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:26.083-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.376-0500 c20012| 2016-04-06T02:52:16.083-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1027 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.380-0500 c20012| 2016-04-06T02:52:16.085-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.392-0500 c20012| 2016-04-06T02:52:16.085-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1027 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.392-0500 c20012| 2016-04-06T02:52:16.085-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.393-0500 c20012| 2016-04-06T02:52:16.085-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:18.085Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.397-0500 c20012| 2016-04-06T02:52:16.085-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.399-0500 c20012| 2016-04-06T02:52:16.254-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.404-0500 c20012| 
2016-04-06T02:52:16.254-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.404-0500 c20012| 2016-04-06T02:52:16.455-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.407-0500 c20012| 2016-04-06T02:52:16.455-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.418-0500 c20012| 2016-04-06T02:52:16.545-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.425-0500 c20012| 2016-04-06T02:52:16.545-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1029 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.429-0500 c20012| 2016-04-06T02:52:16.545-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.431-0500 c20012| 2016-04-06T02:52:16.545-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.433-0500 c20012| 2016-04-06T02:52:16.545-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:17.436-0500 c20012| 2016-04-06T02:52:16.545-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.437-0500 c20012| 2016-04-06T02:52:16.545-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1030 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.439-0500 c20012| 2016-04-06T02:52:16.545-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.440-0500 c20012| 2016-04-06T02:52:16.545-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1030 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:17.450-0500 c20012| 2016-04-06T02:52:16.546-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1029 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.451-0500 c20012| 2016-04-06T02:52:16.546-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1029 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.455-0500 c20012| 2016-04-06T02:52:16.551-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.458-0500 c20012| 2016-04-06T02:52:16.551-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.462-0500 c20012| 2016-04-06T02:52:16.551-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.463-0500 c20012| 2016-04-06T02:52:16.552-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.463-0500 c20012| 2016-04-06T02:52:16.552-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.468-0500 c20012| 2016-04-06T02:52:16.553-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.476-0500 c20012| 2016-04-06T02:52:16.554-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1021 finished with response: { cursor: { nextBatch: [], id: 20785203637, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.482-0500 c20012| 2016-04-06T02:52:16.554-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:17.486-0500 c20012| 2016-04-06T02:52:16.555-0500 D REPL [rsBackgroundSync-0] Cancelling oplog query because we have to choose a sync source. 
Current source: mongovm16:20011, OpTime{ ts: Timestamp 1459929130000|10, t: 1 }, hasSyncSource:0 [js_test:multi_coll_drop] 2016-04-06T02:53:17.489-0500 c20012| 2016-04-06T02:52:16.555-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1033 -- target:mongovm16:20011 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 20785203637 ] } [js_test:multi_coll_drop] 2016-04-06T02:53:17.490-0500 c20012| 2016-04-06T02:52:16.555-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1033 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.491-0500 c20012| 2016-04-06T02:52:16.555-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.491-0500 c20012| 2016-04-06T02:52:16.555-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:17.493-0500 c20012| 2016-04-06T02:52:16.555-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:16.555Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.494-0500 c20012| 2016-04-06T02:52:16.555-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:16.555Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.499-0500 c20012| 2016-04-06T02:52:16.555-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1034 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:26.555-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.509-0500 c20012| 2016-04-06T02:52:16.555-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1033 finished with response: { cursorsKilled: [ 20785203637 ], cursorsNotFound: [], cursorsAlive: [], cursorsUnknown: [], ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.511-0500 c20012| 2016-04-06T02:52:16.555-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1036 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:26.555-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.511-0500 c20012| 2016-04-06T02:52:16.555-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1034 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.512-0500 c20012| 2016-04-06T02:52:16.555-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1036 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.515-0500 c20012| 2016-04-06T02:52:16.557-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1034 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.522-0500 c20012| 2016-04-06T02:52:16.557-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:19.057Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.523-0500 c20012| 2016-04-06T02:52:16.557-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1036 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.524-0500 c20012| 
2016-04-06T02:52:16.557-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:19.057Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.527-0500 c20012| 2016-04-06T02:52:16.558-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.529-0500 c20012| 2016-04-06T02:52:16.558-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.532-0500 c20012| 2016-04-06T02:52:16.658-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.536-0500 c20012| 2016-04-06T02:52:16.658-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.536-0500 c20012| 2016-04-06T02:52:16.859-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.538-0500 c20012| 2016-04-06T02:52:16.860-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.539-0500 c20012| 2016-04-06T02:52:17.059-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.540-0500 c20012| 2016-04-06T02:52:17.059-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.540-0500 c20012| 2016-04-06T02:52:17.061-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.541-0500 c20012| 2016-04-06T02:52:17.061-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.542-0500 c20012| 2016-04-06T02:52:17.201-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.547-0500 c20012| 2016-04-06T02:52:17.202-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.549-0500 c20012| 2016-04-06T02:52:17.261-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.552-0500 c20012| 2016-04-06T02:52:17.262-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.553-0500 c20012| 2016-04-06T02:52:17.438-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.556-0500 c20012| 2016-04-06T02:52:17.438-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.557-0500 c20012| 2016-04-06T02:52:17.462-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.559-0500 c20012| 2016-04-06T02:52:17.462-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.560-0500 c20012| 
2016-04-06T02:52:17.560-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.562-0500 c20012| 2016-04-06T02:52:17.560-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.564-0500 c20012| 2016-04-06T02:52:17.663-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.565-0500 c20012| 2016-04-06T02:52:17.663-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.566-0500 c20012| 2016-04-06T02:52:17.703-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.572-0500 c20012| 2016-04-06T02:52:17.703-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.576-0500 c20012| 2016-04-06T02:52:17.864-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.579-0500 c20012| 2016-04-06T02:52:17.864-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.579-0500 c20012| 2016-04-06T02:52:17.939-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.580-0500 c20012| 2016-04-06T02:52:17.939-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.581-0500 c20012| 2016-04-06T02:52:18.061-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.582-0500 c20012| 2016-04-06T02:52:18.061-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.583-0500 c20012| 2016-04-06T02:52:18.065-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.594-0500 c20012| 2016-04-06T02:52:18.065-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.598-0500 c20012| 2016-04-06T02:52:18.204-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.599-0500 c20012| 2016-04-06T02:52:18.204-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.599-0500 c20012| 2016-04-06T02:52:18.266-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.601-0500 c20012| 2016-04-06T02:52:18.266-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.602-0500 c20012| 2016-04-06T02:52:18.359-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.605-0500 c20012| 2016-04-06T02:52:18.359-0500 I COMMAND [conn10] command 
admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.607-0500 c20012| 2016-04-06T02:52:18.440-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.608-0500 c20012| 2016-04-06T02:52:18.440-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.608-0500 c20012| 2016-04-06T02:52:18.467-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.613-0500 c20012| 2016-04-06T02:52:18.467-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.614-0500 c20012| 2016-04-06T02:52:18.562-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.617-0500 c20012| 2016-04-06T02:52:18.562-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.618-0500 c20012| 2016-04-06T02:52:18.668-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.619-0500 c20012| 2016-04-06T02:52:18.668-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.623-0500 c20012| 2016-04-06T02:52:18.707-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.639-0500 c20012| 2016-04-06T02:52:18.708-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.639-0500 c20012| 2016-04-06T02:52:18.869-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.640-0500 c20012| 2016-04-06T02:52:18.869-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.642-0500 c20012| 2016-04-06T02:52:18.941-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.648-0500 c20012| 2016-04-06T02:52:18.941-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.650-0500 c20012| 2016-04-06T02:52:19.046-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:17.652-0500 c20012| 2016-04-06T02:52:19.046-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20011: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:17.653-0500 c20012| 2016-04-06T02:52:19.046-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:17.656-0500 c20012| 2016-04-06T02:52:19.051-0500 D COMMAND [conn3] run command 
admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.656-0500 c20012| 2016-04-06T02:52:19.051-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.659-0500 c20012| 2016-04-06T02:52:19.051-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.660-0500 c20012| 2016-04-06T02:52:19.055-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.661-0500 c20012| 2016-04-06T02:52:19.055-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.662-0500 c20012| 2016-04-06T02:52:19.066-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1039 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:29.066-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.663-0500 c20012| 2016-04-06T02:52:19.066-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.671-0500 c20012| 2016-04-06T02:52:19.066-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1040 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:29.066-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.673-0500 c20012| 2016-04-06T02:52:19.066-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1039 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.678-0500 c20012| 2016-04-06T02:52:19.066-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1040 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.683-0500 c20012| 2016-04-06T02:52:19.066-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1039 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.700-0500 c20012| 2016-04-06T02:52:19.069-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1040 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.711-0500 c20012| 2016-04-06T02:52:19.070-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.711-0500 c20012| 2016-04-06T02:52:19.076-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:21.576Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.716-0500 c20012| 2016-04-06T02:52:19.076-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 20ms [js_test:multi_coll_drop] 
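Heartbeat rounds like requests 1039/1040 above are what keep each member's view of the others current; clients see the aggregate of that view through replSetGetStatus. A minimal shell probe, assuming a reachable member (host name taken from this log; this is a hand-rolled check, not what the harness runs):

// Print each member's state as seen by the node we connect to; this view is
// exactly what the replSetHeartbeat traffic above keeps up to date.
var conn = new Mongo("mongovm16:20012");
var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
status.members.forEach(function(m) {
    print(m.name + " state=" + m.stateStr + " health=" + m.health);
});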
2016-04-06T02:53:17.719-0500 c20012| 2016-04-06T02:52:19.076-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:21.576Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.726-0500 c20012| 2016-04-06T02:52:19.076-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.728-0500 c20012| 2016-04-06T02:52:19.076-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.730-0500 c20012| 2016-04-06T02:52:19.140-0500 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 5000ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.732-0500 c20012| 2016-04-06T02:52:19.140-0500 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected [js_test:multi_coll_drop] 2016-04-06T02:53:17.740-0500 c20012| 2016-04-06T02:52:19.140-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1043 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:24.140-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.743-0500 c20012| 2016-04-06T02:52:19.140-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1044 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:24.140-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.744-0500 c20012| 2016-04-06T02:52:19.140-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1043 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.745-0500 c20012| 2016-04-06T02:52:19.140-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1044 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.748-0500 c20012| 2016-04-06T02:52:19.141-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1044 finished with response: { term: 1, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.752-0500 c20012| 2016-04-06T02:52:19.141-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1043 finished with response: { term: 1, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.754-0500 c20012| 2016-04-06T02:52:19.141-0500 I REPL [ReplicationExecutor] dry election run succeeded, running for election [js_test:multi_coll_drop] 2016-04-06T02:53:17.756-0500 c20012| 2016-04-06T02:52:19.141-0500 D QUERY [replExecDBWorker-2] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:17.768-0500 c20012| 2016-04-06T02:52:19.141-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1047 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:24.141-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.770-0500 c20012| 2016-04-06T02:52:19.141-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1047 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.775-0500 c20012| 2016-04-06T02:52:19.141-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1048 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:24.141-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.777-0500 c20012| 2016-04-06T02:52:19.141-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1048 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.779-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1047 finished with response: { term: 2, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.784-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 1048 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:24.141-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.785-0500 c20012| 2016-04-06T02:52:19.142-0500 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 2 [js_test:multi_coll_drop] 2016-04-06T02:53:17.787-0500 c20012| 2016-04-06T02:52:19.142-0500 I REPL [ReplicationExecutor] transition to PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:17.789-0500 c20012| 2016-04-06T02:52:19.142-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:19.142Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.791-0500 c20012| 2016-04-06T02:52:19.142-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:19.142Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.803-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1048 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:24.141-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:17.805-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1048 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:17.809-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1050 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:29.142-0500 cmd:{ 
replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.815-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1051 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:29.142-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.815-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.816-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1050 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.817-0500 c20012| 2016-04-06T02:52:19.142-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1052 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.821-0500 c20012| 2016-04-06T02:52:19.143-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1050 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.821-0500 c20012| 2016-04-06T02:52:19.143-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:21.143Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.824-0500 c20012| 2016-04-06T02:52:19.143-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.827-0500 c20012| 2016-04-06T02:52:19.143-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1052 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:17.827-0500 c20012| 2016-04-06T02:52:19.143-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1051 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.831-0500 c20012| 2016-04-06T02:52:19.144-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1051 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.835-0500 c20012| 2016-04-06T02:52:19.144-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:21.144Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.836-0500 c20012| 2016-04-06T02:52:19.208-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.838-0500 c20012| 2016-04-06T02:52:19.209-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.838-0500 c20012| 2016-04-06T02:52:19.277-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.840-0500 c20012| 2016-04-06T02:52:19.277-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.841-0500 c20012| 2016-04-06T02:52:19.441-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 
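The sequence above is a complete election: a dry-run vote in term 1, real replSetRequestVotes in term 2, then "election succeeded, assuming primary role in term 2" and the transition to PRIMARY. Writes are only permitted once the "transition to primary complete" entry just below appears, so a caller that needs the new primary must poll. A hand-rolled sketch of such a wait (not the harness's own helper; host and timeout are illustrative):

// Poll a member until it reports itself writable (ismaster: true), i.e. until
// an election like the one above has fully completed.
function waitForPrimary(hostPort, timeoutMs) {
    var conn = new Mongo(hostPort);
    var deadline = new Date().getTime() + timeoutMs;
    while (new Date().getTime() < deadline) {
        var hello = conn.getDB("admin").runCommand({ isMaster: 1 });
        if (hello.ismaster) {
            return hello.me;  // e.g. "mongovm16:20012"
        }
        sleep(100);
    }
    throw new Error("no primary at " + hostPort + " within " + timeoutMs + "ms");
}

waitForPrimary("mongovm16:20012", 30000);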
2016-04-06T02:53:17.844-0500 c20012| 2016-04-06T02:52:19.442-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.846-0500 c20012| 2016-04-06T02:52:19.478-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.847-0500 c20012| 2016-04-06T02:52:19.478-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.847-0500 c20012| 2016-04-06T02:52:19.560-0500 D REPL [rsSync] Removing temporary collections from config [js_test:multi_coll_drop] 2016-04-06T02:53:17.852-0500 c20012| 2016-04-06T02:52:19.560-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted [js_test:multi_coll_drop] 2016-04-06T02:53:17.853-0500 c20012| 2016-04-06T02:52:19.576-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.855-0500 c20012| 2016-04-06T02:52:19.577-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.857-0500 c20012| 2016-04-06T02:52:19.584-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c02e65c17830b843f1a4') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.859-0500 c20012| 2016-04-06T02:52:19.584-0500 D QUERY [conn11] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.861-0500 c20012| 2016-04-06T02:52:19.584-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c02e65c17830b843f1a4') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.870-0500 c20012| 2016-04-06T02:52:19.585-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c02e65c17830b843f1a4') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:0 docsExamined:0 nMatched:0 nModified:0 numYields:0 reslen:362 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.873-0500 c20012| 2016-04-06T02:52:19.585-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03365c17830b843f1a5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929139585), why: "splitting chunk [{ _id: -81.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.877-0500 c20012| 2016-04-06T02:52:19.585-0500 D QUERY [conn11] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.880-0500 c20012| 2016-04-06T02:52:19.585-0500 D QUERY [conn11] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.882-0500 c20012| 2016-04-06T02:52:19.585-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.884-0500 c20012| 2016-04-06T02:52:19.595-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.885-0500 c20012| 2016-04-06T02:52:19.678-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.886-0500 c20012| 2016-04-06T02:52:19.679-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.886-0500 c20012| 2016-04-06T02:52:19.709-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.888-0500 c20012| 2016-04-06T02:52:19.709-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.891-0500 c20012| 2016-04-06T02:52:19.710-0500 D COMMAND [conn7] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929137199), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.891-0500 c20012| 2016-04-06T02:52:19.710-0500 D QUERY [conn7] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.892-0500 c20012| 2016-04-06T02:52:19.710-0500 D REPL [conn7] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.894-0500 c20012| 2016-04-06T02:52:19.710-0500 I WRITE [conn7] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929137199), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.895-0500 c20012| 2016-04-06T02:52:19.712-0500 D REPL [conn7] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.903-0500 c20012| 2016-04-06T02:52:19.712-0500 D REPL [conn7] Required snapshot optime: { ts: Timestamp 1459929139000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.905-0500 c20012| 2016-04-06T02:52:19.943-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.909-0500 c20012| 2016-04-06T02:52:19.943-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms 
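The findAndModify pair above is the config-server distributed lock protocol: the first call releases the previous lock by its ts (setting state back to 0), and the second upserts the lock document for multidrop.coll with state: 2 and a majority write concern, so the lock grab survives config-server failovers like the election above. Reduced to a shell sketch (field values are illustrative, not the test's):

// Try to take the lock: the query only matches if the document is absent or
// unlocked (state: 0); state: 2 marks it held. w: "majority" makes the grab
// durable across config-server elections.
var grabbed = db.getSiblingDB("config").locks.findAndModify({
    query: { _id: "multidrop.coll", state: 0 },
    update: { $set: {
        ts: ObjectId(),                          // per-acquisition id
        state: 2,
        who: "host:20010:conn1",                 // holder, illustrative
        why: "splitting chunk in multidrop.coll"
    } },
    upsert: true,
    new: true,
    writeConcern: { w: "majority", wtimeout: 15000 }
});
// If another process holds the lock, the upsert collides on _id and the server
// returns a duplicate-key error, which the real lock manager treats as "busy".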
[js_test:multi_coll_drop] 2016-04-06T02:53:17.911-0500 c20012| 2016-04-06T02:52:19.944-0500 D COMMAND [conn9] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929137435), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.918-0500 c20012| 2016-04-06T02:52:19.944-0500 D QUERY [conn9] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.929-0500 c20012| 2016-04-06T02:52:19.944-0500 D REPL [conn9] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.950-0500 c20012| 2016-04-06T02:52:19.944-0500 D REPL [conn9] Required snapshot optime: { ts: Timestamp 1459929139000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.952-0500 c20012| 2016-04-06T02:52:19.944-0500 I WRITE [conn9] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929137435), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:17.956-0500 c20012| 2016-04-06T02:52:19.950-0500 D REPL [conn9] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.959-0500 c20012| 2016-04-06T02:52:19.950-0500 D REPL [conn9] Required snapshot optime: { ts: Timestamp 1459929139000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.960-0500 c20012| 2016-04-06T02:52:19.950-0500 D REPL [conn9] Required snapshot optime: { ts: Timestamp 1459929139000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:17.963-0500 c20012| 2016-04-06T02:52:21.143-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1056 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:31.143-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.964-0500 c20012| 2016-04-06T02:52:21.143-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1056 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:17.967-0500 c20012| 2016-04-06T02:52:21.144-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1056 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 
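The config.mongos upserts above (conn7 for mongovm16:20014, conn9 for mongovm16:20015) are each mongos's periodic liveness ping, and the repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" entries are the freshly elected primary waiting for its term-2 writes to be majority-replicated before acknowledging them. The write itself is a plain upsert; a minimal sketch (shape per the log, values illustrative):

// The mongos ping: an upsert keyed by host:port, acknowledged only once a
// majority of the config replica set has the write -- hence the snapshot
// waits above while the secondaries catch up to term 2.
var res = db.getSiblingDB("config").mongos.update(
    { _id: "mongovm16:20014" },
    { $set: {
        ping: new Date(),
        up: 10,                 // seconds of uptime, illustrative
        waiting: false,
        mongoVersion: "3.3.4"   // illustrative
    } },
    { upsert: true, writeConcern: { w: "majority", wtimeout: 15000 } }
);
printjson(res);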
2016-04-06T02:53:17.968-0500 c20012| 2016-04-06T02:52:21.144-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:23.144Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.982-0500 c20012| 2016-04-06T02:52:21.144-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1058 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:31.144-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.985-0500 c20012| 2016-04-06T02:52:21.144-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1058 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:17.989-0500 c20012| 2016-04-06T02:52:21.144-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1058 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:17.990-0500 c20012| 2016-04-06T02:52:21.144-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:23.144Z [js_test:multi_coll_drop] 2016-04-06T02:53:17.992-0500 c20012| 2016-04-06T02:52:21.552-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:17.993-0500 c20012| 2016-04-06T02:52:21.552-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:17.994-0500 c20012| 2016-04-06T02:52:21.553-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.022-0500 c20012| 2016-04-06T02:52:21.576-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.022-0500 c20012| 2016-04-06T02:52:21.576-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:18.030-0500 c20012| 2016-04-06T02:52:21.576-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.033-0500 c20012| 2016-04-06T02:52:21.615-0500 D COMMAND [conn3] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.043-0500 c20012| 2016-04-06T02:52:21.616-0500 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. 
query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:18.053-0500 c20012| 2016-04-06T02:52:21.616-0500 I COMMAND [conn3] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.055-0500 c20012| 2016-04-06T02:52:21.617-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:37469 #14 (12 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:18.058-0500 c20012| 2016-04-06T02:52:21.617-0500 D COMMAND [conn14] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.060-0500 c20012| 2016-04-06T02:52:21.617-0500 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.069-0500 c20012| 2016-04-06T02:52:21.618-0500 D COMMAND [conn14] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.076-0500 c20012| 2016-04-06T02:52:21.618-0500 D COMMAND [conn14] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.088-0500 c20012| 2016-04-06T02:52:21.618-0500 D REPL [conn14] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.091-0500 c20012| 2016-04-06T02:52:21.618-0500 D REPL [conn14] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.099-0500 c20012| 2016-04-06T02:52:21.618-0500 I COMMAND [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.102-0500 c20012| 2016-04-06T02:52:21.619-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:37470 #15 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:18.103-0500 c20012| 2016-04-06T02:52:21.619-0500 D COMMAND [conn15] run command admin.$cmd { isMaster: 1, hostInfo: 
"mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.105-0500 c20012| 2016-04-06T02:52:21.620-0500 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.112-0500 c20012| 2016-04-06T02:52:21.620-0500 D COMMAND [conn15] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929130000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.121-0500 c20012| 2016-04-06T02:52:21.621-0500 I COMMAND [conn15] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929130000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } planSummary: COLLSCAN cursorid:22197973872 keysExamined:0 docsExamined:5 numYields:0 nreturned:5 reslen:1201 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.125-0500 c20012| 2016-04-06T02:52:21.624-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.133-0500 c20012| 2016-04-06T02:52:21.636-0500 D COMMAND [conn14] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.133-0500 c20012| 2016-04-06T02:52:21.636-0500 D COMMAND [conn14] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.137-0500 c20012| 2016-04-06T02:52:21.636-0500 D REPL [conn14] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929139000|2, t: 2 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.140-0500 c20012| 2016-04-06T02:52:21.636-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.141-0500 c20012| 2016-04-06T02:52:21.636-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.146-0500 c20012| 2016-04-06T02:52:21.636-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.151-0500 c20012| 2016-04-06T02:52:21.637-0500 D REPL [conn14] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 
1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.155-0500 c20012| 2016-04-06T02:52:21.637-0500 I COMMAND [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.161-0500 c20012| 2016-04-06T02:52:21.638-0500 D COMMAND [conn14] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.161-0500 c20012| 2016-04-06T02:52:21.638-0500 D COMMAND [conn14] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.168-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929139000|5, t: 2 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.171-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.176-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.178-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929130000|10, t: 1 }, name-id: "185" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.181-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.188-0500 c20012| 2016-04-06T02:52:21.638-0500 I COMMAND [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 
1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.194-0500 c20012| 2016-04-06T02:52:21.638-0500 D COMMAND [conn14] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.196-0500 c20012| 2016-04-06T02:52:21.638-0500 D COMMAND [conn14] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.203-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929139000|5, t: 2 } and is durable through: { ts: Timestamp 1459929139000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.203-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Updating _lastCommittedOpTime to { ts: Timestamp 1459929139000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.207-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929139000|2, t: 2 }, name-id: "186" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.209-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929139000|2, t: 2 }, name-id: "186" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.213-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929139000|2, t: 2 }, name-id: "186" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.216-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929139000|2, t: 2 }, name-id: "186" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.222-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929139000|2, t: 2 }, name-id: "186" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.229-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] Required snapshot optime: { ts: Timestamp 1459929139000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929139000|2, t: 2 }, name-id: "186" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.233-0500 c20012| 2016-04-06T02:52:21.638-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:37476 #16 (14 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:18.238-0500 c20012| 2016-04-06T02:52:21.638-0500 D REPL [conn14] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 
1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.246-0500 c20012| 2016-04-06T02:52:21.638-0500 I COMMAND [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.249-0500 c20012| 2016-04-06T02:52:21.639-0500 D COMMAND [conn14] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.251-0500 c20012| 2016-04-06T02:52:21.639-0500 D COMMAND [conn14] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.255-0500 c20012| 2016-04-06T02:52:21.639-0500 D COMMAND [conn16] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.259-0500 c20012| 2016-04-06T02:52:21.639-0500 D REPL [conn14] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929139000|5, t: 2 } and is durable through: { ts: Timestamp 1459929139000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.261-0500 c20012| 2016-04-06T02:52:21.639-0500 D REPL [conn14] Updating _lastCommittedOpTime to { ts: Timestamp 1459929139000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.262-0500 c20012| 2016-04-06T02:52:21.639-0500 D REPL [conn14] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.266-0500 c20012| 2016-04-06T02:52:21.639-0500 I COMMAND [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.268-0500 c20012| 2016-04-06T02:52:21.641-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 
4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.279-0500 c20012| 2016-04-06T02:52:21.641-0500 I COMMAND [conn7] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929137199), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 1930ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.290-0500 c20012| 2016-04-06T02:52:21.641-0500 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.297-0500 c20012| 2016-04-06T02:52:21.641-0500 I COMMAND [conn9] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929137435), up: 10, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 1696ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.303-0500 c20012| 2016-04-06T02:52:21.641-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03365c17830b843f1a5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929139585), why: "splitting chunk [{ _id: -81.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c03365c17830b843f1a5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929139585), why: "splitting chunk [{ _id: -81.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 2055ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.306-0500 c20012| 2016-04-06T02:52:21.644-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.309-0500 c20012| 2016-04-06T02:52:21.646-0500 D COMMAND [conn7] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929141645), up: 14, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: 
false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.310-0500 c20012| 2016-04-06T02:52:21.646-0500 D QUERY [conn7] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.316-0500 c20012| 2016-04-06T02:52:21.646-0500 I WRITE [conn7] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929141645), up: 14, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.320-0500 c20012| 2016-04-06T02:52:21.646-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.323-0500 c20011| 2016-04-06T02:52:41.749-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|5, t: 3 } and is durable through: { ts: Timestamp 1459929161000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.327-0500 c20011| 2016-04-06T02:52:41.749-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|4, t: 3 }, name-id: "204" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.328-0500 c20011| 2016-04-06T02:52:41.749-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|6, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|4, t: 3 }, name-id: "204" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.337-0500 c20011| 2016-04-06T02:52:41.749-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.340-0500 c20011| 2016-04-06T02:52:41.750-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.341-0500 c20011| 
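The replSetUpdatePosition traffic above is how progress propagates up the sync-source chain: each reporter passes along the applied and durable opTime of every member it knows about, and the primary advances the commit point once a majority of members is durable at an opTime, visible in the "Updating _lastCommittedOpTime to ..." lines throughout this stretch. An illustrative sketch of that majority rule, with hypothetical opTime values; this is not the server's actual implementation:

    // Illustrative only: the commit point is the highest opTime that a
    // majority of members have durably reached.
    function majorityCommitPoint(durableOpTimes) {
        var sorted = durableOpTimes.slice().sort(function (a, b) { return a - b; });
        var majority = Math.floor(sorted.length / 2) + 1;
        // The (n - majority)-th entry in ascending order is durable on at
        // least `majority` members.
        return sorted[sorted.length - majority];
    }
    // Hypothetical 3-member set: members durable at opTimes 10, 25, 25.
    print(majorityCommitPoint([10, 25, 25])); // 25 -- two of three are durable there
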
2016-04-06T02:52:41.750-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.351-0500 c20011| 2016-04-06T02:52:41.750-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.356-0500 c20011| 2016-04-06T02:52:41.750-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|5, t: 3 } and is durable through: { ts: Timestamp 1459929161000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.358-0500 c20011| 2016-04-06T02:52:41.750-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.362-0500 c20011| 2016-04-06T02:52:41.750-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|6, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|5, t: 3 }, name-id: "205" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.365-0500 c20011| 2016-04-06T02:52:41.750-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|6, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|5, t: 3 }, name-id: "205" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.374-0500 c20011| 2016-04-06T02:52:41.750-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.377-0500 c20011| 2016-04-06T02:52:41.762-0500 I COMMAND [conn43] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.383-0500 c20011| 2016-04-06T02:52:41.762-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.387-0500 c20011| 2016-04-06T02:52:41.765-0500 I COMMAND [conn36] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929161743), up: 34, waiting: true, 
mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.394-0500 c20011| 2016-04-06T02:52:41.767-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|50 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.403-0500 c20011| 2016-04-06T02:52:41.767-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.406-0500 c20011| 2016-04-06T02:52:41.767-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|50 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.408-0500 c20011| 2016-04-06T02:52:41.767-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:18.412-0500 c20011| 2016-04-06T02:52:41.769-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.413-0500 c20011| 2016-04-06T02:52:41.769-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.416-0500 c20011| 2016-04-06T02:52:41.769-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.417-0500 c20011| 2016-04-06T02:52:41.769-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|6, t: 3 } and is durable through: { ts: Timestamp 1459929161000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.419-0500 c20011| 2016-04-06T02:52:41.769-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|6, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|5, t: 3 }, name-id: "205" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.424-0500 c20011| 2016-04-06T02:52:41.769-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, 
appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.430-0500 c20011| 2016-04-06T02:52:41.770-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|50 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.432-0500 c20011| 2016-04-06T02:52:41.770-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|5, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.436-0500 c20011| 2016-04-06T02:52:41.771-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.439-0500 c20011| 2016-04-06T02:52:41.771-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.441-0500 c20011| 2016-04-06T02:52:41.771-0500 D COMMAND [conn36] Using 'committed' snapshot. 
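The D QUERY score(...) lines just before and after this point expose the plan ranker's formula: score = baseScore(1) + productivity(advanced/works) + tieBreakers, where the tie breakers are the three 0.0001 bonuses printed in the log. For conn36's chunks query that advanced 2 of 3 works, that is 1 + 0.666667 + 0.0003 ≈ 1.66697; a plan that advances on every work unit scores 1 + 1 + 0.0003 = 2.0003. The same arithmetic as a sketch:

    // Mirrors the score(...) breakdown printed above; the 0.0003 is the
    // three 0.0001 bonuses (noFetch, noSort, noIxisect).
    function planScore(advanced, works) {
        var baseScore = 1;
        var productivity = advanced / works;
        var tieBreakers = 0.0001 + 0.0001 + 0.0001;
        return baseScore + productivity + tieBreakers;
    }
    print(planScore(2, 3)); // ~1.66697, as logged for conn36's chunks query
    print(planScore(1, 1)); // 2.0003
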
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.446-0500 c20011| 2016-04-06T02:52:41.771-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:18.447-0500 s20014| 2016-04-06T02:53:03.716-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:18.449-0500 s20014| 2016-04-06T02:53:03.716-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:18.449-0500 s20014| 2016-04-06T02:53:03.716-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20011, no events [js_test:multi_coll_drop] 2016-04-06T02:53:18.450-0500 s20014| 2016-04-06T02:53:03.716-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:53:18.451-0500 s20014| 2016-04-06T02:53:03.716-0500 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:18.456-0500 s20014| 2016-04-06T02:53:03.720-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:18.463-0500 s20014| 2016-04-06T02:53:03.720-0500 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:18.470-0500 c20011| 2016-04-06T02:52:41.771-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.481-0500 c20011| 2016-04-06T02:52:41.773-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04965c17830b843f1b1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161772), why: "splitting chunk [{ _id: -75.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.485-0500 c20011| 2016-04-06T02:52:41.773-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.495-0500 c20011| 2016-04-06T02:52:41.773-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.501-0500 c20011| 2016-04-06T02:52:41.773-0500 D COMMAND [conn35] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.501-0500 c20011| 2016-04-06T02:52:41.773-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.502-0500 c20011| 2016-04-06T02:52:41.773-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.523-0500 c20011| 2016-04-06T02:52:41.773-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.528-0500 c20012| 2016-04-06T02:52:21.648-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.528-0500 c20012| 2016-04-06T02:52:21.648-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.533-0500 c20012| 2016-04-06T02:52:21.648-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929141000|1, t: 2 } and is durable through: { ts: Timestamp 1459929139000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.535-0500 c20012| 2016-04-06T02:52:21.648-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.540-0500 c20012| 2016-04-06T02:52:21.648-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.543-0500 c20012| 2016-04-06T02:52:21.650-0500 D REPL [conn7] Required snapshot optime: { ts: Timestamp 1459929141000|1, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929139000|5, t: 2 }, name-id: "189" } [js_test:multi_coll_drop] 
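conn40's findAndModify on config.locks, echoed just above (and completed by conn11 earlier in this stretch), is the config-server distributed lock that serializes chunk splits: it atomically moves the lock document for multidrop.coll from state 0 (free) to state 2 (held), records who/process/when/why, and writes with w: "majority" so the lock survives a config-server primary failover. A sketch of the acquisition, with all field values copied from the logged command of this run:

    // Sketch of the lock grab seen above; the state 0 -> 2 transition is
    // atomic, so a concurrent caller's { _id: ..., state: 0 } query
    // matches nothing and its grab fails.
    db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: {
            ts: ObjectId('5704c04965c17830b843f1b1'),
            state: 2,
            who: "mongovm16:20010:1459929128:185613966:conn5",
            process: "mongovm16:20010:1459929128:185613966",
            when: new Date(1459929161772),
            why: "splitting chunk [{ _id: -75.0 }, { _id: MaxKey }) in multidrop.coll"
        } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
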
2016-04-06T02:53:18.549-0500 c20012| 2016-04-06T02:52:21.651-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.549-0500 c20012| 2016-04-06T02:52:21.651-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.554-0500 c20012| 2016-04-06T02:52:21.651-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929141000|1, t: 2 } and is durable through: { ts: Timestamp 1459929141000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.556-0500 c20012| 2016-04-06T02:52:21.651-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929141000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.561-0500 c20012| 2016-04-06T02:52:21.651-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.567-0500 c20012| 2016-04-06T02:52:21.651-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.574-0500 c20012| 2016-04-06T02:52:21.652-0500 I COMMAND [conn7] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929141645), up: 14, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.579-0500 c20012| 2016-04-06T02:52:21.653-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.583-0500 c20012| 2016-04-06T02:52:21.654-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, 
Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.593-0500 c20012| 2016-04-06T02:52:21.655-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.597-0500 c20012| 2016-04-06T02:52:22.554-0500 D COMMAND [conn5] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.601-0500 c20012| 2016-04-06T02:52:22.554-0500 D QUERY [conn5] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:18.604-0500 c20012| 2016-04-06T02:52:22.554-0500 I COMMAND [conn5] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.607-0500 c20012| 2016-04-06T02:52:22.555-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:37532 #17 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:18.627-0500 c20012| 2016-04-06T02:52:22.556-0500 D COMMAND [conn17] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.628-0500 c20012| 2016-04-06T02:52:22.556-0500 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.628-0500 c20012| 2016-04-06T02:52:22.556-0500 D COMMAND [conn17] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929130000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.632-0500 c20012| 2016-04-06T02:52:22.556-0500 I COMMAND [conn17] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929130000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } planSummary: COLLSCAN cursorid:25449496203 keysExamined:0 docsExamined:6 numYields:0 nreturned:6 reslen:1371 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.634-0500 c20012| 2016-04-06T02:52:22.557-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:37533 #18 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:18.637-0500 c20012| 2016-04-06T02:52:22.557-0500 D COMMAND [conn18] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.639-0500 c20012| 2016-04-06T02:52:22.557-0500 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.647-0500 c20012| 2016-04-06T02:52:22.558-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.648-0500 c20012| 2016-04-06T02:52:22.558-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.651-0500 c20012| 2016-04-06T02:52:22.558-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.660-0500 c20012| 2016-04-06T02:52:22.558-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.664-0500 c20012| 2016-04-06T02:52:22.558-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.672-0500 c20012| 2016-04-06T02:52:22.560-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.705-0500 c20012| 2016-04-06T02:52:22.560-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.706-0500 c20012| 2016-04-06T02:52:22.560-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.710-0500 c20012| 2016-04-06T02:52:22.560-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.713-0500 c20012| 2016-04-06T02:52:22.560-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929139000|2, t: 2 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.717-0500 c20012| 
2016-04-06T02:52:22.560-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.719-0500 c20012| 2016-04-06T02:52:22.561-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.720-0500 c20012| 2016-04-06T02:52:22.561-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.721-0500 c20012| 2016-04-06T02:52:22.561-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.727-0500 c20012| 2016-04-06T02:52:22.561-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929141000|1, t: 2 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.731-0500 c20012| 2016-04-06T02:52:22.561-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.733-0500 c20012| 2016-04-06T02:52:22.562-0500 D COMMAND [conn9] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.737-0500 c20012| 2016-04-06T02:52:22.562-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 2, cfgver: 
1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.738-0500 c20012| 2016-04-06T02:52:22.562-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.743-0500 c20012| 2016-04-06T02:52:22.562-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.747-0500 c20012| 2016-04-06T02:52:22.562-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929141000|1, t: 2 } and is durable through: { ts: Timestamp 1459929139000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.752-0500 c20012| 2016-04-06T02:52:22.562-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.756-0500 c20012| 2016-04-06T02:52:22.562-0500 D COMMAND [conn9] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.759-0500 c20012| 2016-04-06T02:52:22.562-0500 D COMMAND [conn9] Using 'committed' snapshot. 
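conn9's read of config.settings around this point shows the consumer side of the commit point: readConcern { level: "majority", afterOpTime: ... } makes the server block ("Waiting for 'committed' snapshot to be available for reading") until the committed snapshot reaches the requested opTime, then answer from that snapshot ("Using 'committed' snapshot."). The same find, reconstructed with the shell's Timestamp(seconds, increment) constructor; the log renders this opTime as Timestamp 1459929141000|1:

    // Majority read that waits for the committed snapshot to catch up to
    // afterOpTime before returning (values copied from the log above).
    db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "chunksize" },
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929141, 1), t: 2 }
        },
        limit: 1,
        maxTimeMS: 30000
    });
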
{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.760-0500 c20012| 2016-04-06T02:52:22.562-0500 D QUERY [conn9] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:18.767-0500 c20012| 2016-04-06T02:52:22.562-0500 I COMMAND [conn9] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929141000|1, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:434 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.771-0500 c20012| 2016-04-06T02:52:22.564-0500 D COMMAND [conn11] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: -80.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-81.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-80.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|40 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.774-0500 c20012| 2016-04-06T02:52:22.564-0500 D QUERY [conn11] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:18.776-0500 c20012| 2016-04-06T02:52:22.564-0500 D COMMAND [conn9] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929142564), up: 15, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.777-0500 c20012| 2016-04-06T02:52:22.564-0500 D QUERY [conn11] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:18.780-0500 c20012| 2016-04-06T02:52:22.564-0500 I COMMAND [conn11] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.780-0500 c20012| 2016-04-06T02:52:22.564-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-81.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.782-0500 c20012| 2016-04-06T02:52:22.564-0500 D QUERY [conn11] Using idhack: { _id: 
"multidrop.coll-_id_-80.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.783-0500 c20012| 2016-04-06T02:52:22.564-0500 D QUERY [conn9] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.789-0500 c20012| 2016-04-06T02:52:22.565-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.796-0500 c20012| 2016-04-06T02:52:22.565-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } cursorid:22197973872 numYields:1 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 909ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.801-0500 c20012| 2016-04-06T02:52:22.565-0500 I WRITE [conn9] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929142564), up: 15, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 638 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.808-0500 c20012| 2016-04-06T02:52:22.567-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.809-0500 c20012| 2016-04-06T02:52:22.567-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.811-0500 c20012| 2016-04-06T02:52:22.567-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.815-0500 c20012| 2016-04-06T02:52:22.567-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|1, t: 2 } and is durable through: { ts: Timestamp 1459929139000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.819-0500 c20012| 2016-04-06T02:52:22.567-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, 
t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.823-0500 c20012| 2016-04-06T02:52:22.567-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.828-0500 c20012| 2016-04-06T02:52:22.567-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.829-0500 c20012| 2016-04-06T02:52:22.569-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.835-0500 c20012| 2016-04-06T02:52:22.569-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.837-0500 c20012| 2016-04-06T02:52:22.570-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.843-0500 c20012| 2016-04-06T02:52:22.571-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.844-0500 c20012| 2016-04-06T02:52:22.571-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.845-0500 c20012| 2016-04-06T02:52:22.571-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|1, t: 2 } and is durable through: { ts: Timestamp 1459929141000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.848-0500 c20012| 2016-04-06T02:52:22.571-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.851-0500 c20012| 2016-04-06T02:52:22.571-0500 I COMMAND 
[conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.855-0500 c20012| 2016-04-06T02:52:22.572-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:18.861-0500 c20012| 2016-04-06T02:52:22.576-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.861-0500 c20012| 2016-04-06T02:52:22.576-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.866-0500 c20012| 2016-04-06T02:52:22.576-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.868-0500 c20012| 2016-04-06T02:52:22.576-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|2, t: 2 } and is durable through: { ts: Timestamp 1459929139000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.872-0500 c20012| 2016-04-06T02:52:22.576-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.876-0500 c20012| 2016-04-06T02:52:22.580-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.877-0500 c20012| 
2016-04-06T02:52:22.580-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.879-0500 c20012| 2016-04-06T02:52:22.580-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|2, t: 2 } and is durable through: { ts: Timestamp 1459929141000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.884-0500 c20012| 2016-04-06T02:52:22.580-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.899-0500 c20012| 2016-04-06T02:52:22.580-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.917-0500 c20012| 2016-04-06T02:52:22.589-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|1, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929141000|1, t: 2 }, name-id: "190" } [js_test:multi_coll_drop] 2016-04-06T02:53:18.921-0500 c20012| 2016-04-06T02:52:22.590-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.922-0500 c20012| 2016-04-06T02:52:22.590-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.928-0500 c20012| 2016-04-06T02:52:22.590-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|2, t: 2 } and is durable through: { ts: Timestamp 1459929142000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.929-0500 c20012| 2016-04-06T02:52:22.590-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.932-0500 c20012| 2016-04-06T02:52:22.590-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.938-0500 c20012| 2016-04-06T02:52:22.590-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.953-0500 c20012| 2016-04-06T02:52:22.590-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } cursorid:25449496203 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 18ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.961-0500 c20012| 2016-04-06T02:52:22.590-0500 I COMMAND [conn11] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: -80.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-81.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-80.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|40 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 26ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.963-0500 c20012| 2016-04-06T02:52:22.590-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } cursorid:22197973872 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 20ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.966-0500 c20012| 2016-04-06T02:52:22.591-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:18.966-0500 c20012| 2016-04-06T02:52:22.591-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:18.971-0500 c20012| 2016-04-06T02:52:22.591-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { 
ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.979-0500 c20012| 2016-04-06T02:52:22.591-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|2, t: 2 } and is durable through: { ts: Timestamp 1459929141000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.983-0500 c20011| 2016-04-06T02:52:41.773-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|6, t: 3 } and is durable through: { ts: Timestamp 1459929161000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.985-0500 c20011| 2016-04-06T02:52:41.773-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:18.988-0500 c20011| 2016-04-06T02:52:41.773-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.993-0500 c20011| 2016-04-06T02:52:41.773-0500 I COMMAND [conn38] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929161747), up: 34, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 25ms [js_test:multi_coll_drop] 2016-04-06T02:53:18.996-0500 c20011| 2016-04-06T02:52:41.773-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|5, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.015-0500 c20011| 2016-04-06T02:52:41.776-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.016-0500 c20011| 2016-04-06T02:52:41.776-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.021-0500 c20011| 2016-04-06T02:52:41.776-0500 D REPL [conn35] received 
notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.022-0500 c20011| 2016-04-06T02:52:41.776-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|7, t: 3 } and is durable through: { ts: Timestamp 1459929161000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.024-0500 c20011| 2016-04-06T02:52:41.776-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.027-0500 c20011| 2016-04-06T02:52:41.777-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.029-0500 c20011| 2016-04-06T02:52:41.777-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.031-0500 c20011| 2016-04-06T02:52:41.777-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.036-0500 c20011| 2016-04-06T02:52:41.777-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|7, t: 3 } and is durable through: { ts: Timestamp 1459929161000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.042-0500 c20011| 2016-04-06T02:52:41.777-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.045-0500 c20011| 2016-04-06T02:52:41.778-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|6, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.047-0500 c20011| 
2016-04-06T02:52:41.779-0500 D REPL [conn40] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.052-0500 c20011| 2016-04-06T02:52:41.782-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04965c17830b843f1b1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161772), why: "splitting chunk [{ _id: -75.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04965c17830b843f1b1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161772), why: "splitting chunk [{ _id: -75.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.055-0500 c20011| 2016-04-06T02:52:41.782-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|6, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.060-0500 c20011| 2016-04-06T02:52:41.782-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.062-0500 c20011| 2016-04-06T02:52:41.784-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|52 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|7, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.064-0500 c20011| 2016-04-06T02:52:41.784-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|7, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.065-0500 c20011| 2016-04-06T02:52:41.784-0500 D COMMAND [conn40] Using 'committed' snapshot. 
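The findAndModify on config.locks above is the distributed-lock acquisition protecting the split of [{ _id: -75.0 }, { _id: MaxKey }); the records that follow read the chunk metadata back at readConcern majority and then commit the split as a single applyOps. A sketch of that commit, reconstructed from the applyOps record further below (all field values copied from the log; Timestamp 1000|53 in the log is Timestamp(1, 53) in the shell):

    // Sketch only: the split commit visible in the surrounding records.
    // Two chunk documents are rewritten in one applyOps whose preCondition
    // asserts that no concurrent split or migration bumped the collection's
    // highest lastmod in the meantime.
    var configDB = db.getSiblingDB("config");
    var epoch = ObjectId('5704c02806c33406d4d9c0c0');  // collection epoch from the log
    var res = configDB.runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp(1, 53),
                   lastmodEpoch: epoch, ns: "multidrop.coll",
                   min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-75.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp(1, 54),
                   lastmodEpoch: epoch, ns: "multidrop.coll",
                   min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-74.0" } }
        ],
        // Abort unless the newest chunk for the collection still has lastmod 1|52.
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1, 52) } } ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    printjson(res);

If another split or migration had already advanced the collection's highest lastmod past 1|52, the preCondition would fail and the whole applyOps would be rejected, which is why the shard re-reads config.chunks under the lock first. On success it records a "split" document in config.changelog and releases the lock by resetting its state to 0 via findAndModify on config.locks, both of which appear in the later records.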
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|52 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|7, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.067-0500 c20011| 2016-04-06T02:52:41.784-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:19.070-0500 c20011| 2016-04-06T02:52:41.785-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|52 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|7, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.078-0500 c20012| 2016-04-06T02:52:22.591-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.081-0500 c20012| 2016-04-06T02:52:22.591-0500 D COMMAND [conn11] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:22.591-0500-5704c03665c17830b843f1a6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142591), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -81.0 }, max: { _id: MaxKey } }, left: { min: { _id: -81.0 }, max: { _id: -80.0 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -80.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.082-0500 c20012| 2016-04-06T02:52:22.591-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.082-0500 c20012| 2016-04-06T02:52:22.591-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.086-0500 c20012| 2016-04-06T02:52:22.591-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { 
acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.098-0500 c20012| 2016-04-06T02:52:22.591-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.104-0500 c20012| 2016-04-06T02:52:22.593-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.106-0500 c20012| 2016-04-06T02:52:22.593-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.109-0500 c20012| 2016-04-06T02:52:22.593-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.113-0500 c20012| 2016-04-06T02:52:22.593-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|3, t: 2 } and is durable through: { ts: Timestamp 1459929141000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.122-0500 c20012| 2016-04-06T02:52:22.593-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.124-0500 c20012| 2016-04-06T02:52:22.594-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.126-0500 c20012| 2016-04-06T02:52:22.594-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.130-0500 c20012| 2016-04-06T02:52:22.595-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { 
ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.131-0500 c20012| 2016-04-06T02:52:22.595-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.134-0500 c20012| 2016-04-06T02:52:22.595-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|3, t: 2 } and is durable through: { ts: Timestamp 1459929142000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.142-0500 c20012| 2016-04-06T02:52:22.595-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.146-0500 c20012| 2016-04-06T02:52:22.595-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.160-0500 c20012| 2016-04-06T02:52:22.615-0500 D REPL [conn9] Required snapshot optime: { ts: Timestamp 1459929142000|2, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|1, t: 2 }, name-id: "191" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.164-0500 c20012| 2016-04-06T02:52:22.615-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.165-0500 c20012| 2016-04-06T02:52:22.615-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.174-0500 c20012| 2016-04-06T02:52:22.615-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|3, t: 2 } and is durable through: { ts: Timestamp 1459929142000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.176-0500 c20012| 2016-04-06T02:52:22.615-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.176-0500 c20012| 2016-04-06T02:52:22.615-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.184-0500 c20012| 
2016-04-06T02:52:22.615-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.187-0500 c20012| 2016-04-06T02:52:22.615-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.187-0500 c20012| 2016-04-06T02:52:22.615-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.189-0500 c20012| 2016-04-06T02:52:22.615-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.191-0500 c20012| 2016-04-06T02:52:22.615-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|3, t: 2 } and is durable through: { ts: Timestamp 1459929142000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.193-0500 c20012| 2016-04-06T02:52:22.615-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.195-0500 c20012| 2016-04-06T02:52:22.625-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } cursorid:22197973872 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 30ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.197-0500 c20012| 2016-04-06T02:52:22.625-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } cursorid:25449496203 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { 
acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 31ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.206-0500 c20012| 2016-04-06T02:52:22.626-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|2, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.214-0500 c20012| 2016-04-06T02:52:22.626-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|2, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.226-0500 c20012| 2016-04-06T02:52:22.631-0500 I COMMAND [conn9] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929142564), up: 15, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 638 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 66ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.232-0500 c20012| 2016-04-06T02:52:22.632-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|2, t: 2 }, name-id: "192" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.235-0500 c20012| 2016-04-06T02:52:22.633-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.235-0500 c20012| 2016-04-06T02:52:22.633-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.240-0500 c20012| 2016-04-06T02:52:22.633-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|3, t: 2 } and is durable through: { ts: Timestamp 1459929142000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.241-0500 c20012| 2016-04-06T02:52:22.633-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.248-0500 c20012| 2016-04-06T02:52:22.633-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.250-0500 c20012| 2016-04-06T02:52:22.633-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 0, 
cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.261-0500 c20012| 2016-04-06T02:52:22.633-0500 I COMMAND [conn11] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:22.591-0500-5704c03665c17830b843f1a6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142591), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -81.0 }, max: { _id: MaxKey } }, left: { min: { _id: -81.0 }, max: { _id: -80.0 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -80.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 41ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.272-0500 c20012| 2016-04-06T02:52:22.633-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|2, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.288-0500 c20012| 2016-04-06T02:52:22.633-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|2, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.304-0500 c20012| 2016-04-06T02:52:22.633-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.304-0500 c20012| 2016-04-06T02:52:22.633-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.325-0500 c20012| 2016-04-06T02:52:22.633-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.333-0500 c20012| 2016-04-06T02:52:22.633-0500 D REPL 
[conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|3, t: 2 } and is durable through: { ts: Timestamp 1459929142000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.340-0500 c20012| 2016-04-06T02:52:22.633-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.343-0500 c20012| 2016-04-06T02:52:22.633-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c03365c17830b843f1a5') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.346-0500 c20011| 2016-04-06T02:52:41.786-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-74.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|52 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.347-0500 c20011| 2016-04-06T02:52:41.786-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:19.349-0500 c20011| 2016-04-06T02:52:41.786-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:19.352-0500 c20011| 2016-04-06T02:52:41.786-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.353-0500 c20011| 2016-04-06T02:52:41.786-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-75.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.355-0500 c20011| 2016-04-06T02:52:41.786-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-74.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.357-0500 c20011| 2016-04-06T02:52:41.788-0500 I COMMAND [conn30] command local.oplog.rs 
command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|7, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.360-0500 c20011| 2016-04-06T02:52:41.791-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.361-0500 c20011| 2016-04-06T02:52:41.791-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.362-0500 c20011| 2016-04-06T02:52:41.791-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.364-0500 c20011| 2016-04-06T02:52:41.791-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|8, t: 3 } and is durable through: { ts: Timestamp 1459929161000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.367-0500 c20011| 2016-04-06T02:52:41.791-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.370-0500 c20011| 2016-04-06T02:52:41.791-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.372-0500 c20011| 2016-04-06T02:52:41.792-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|8, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|7, t: 3 }, name-id: "207" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.379-0500 c20011| 2016-04-06T02:52:41.796-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 2, cfgver: 1 } ] 
} [js_test:multi_coll_drop] 2016-04-06T02:53:19.380-0500 c20011| 2016-04-06T02:52:41.796-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.386-0500 c20011| 2016-04-06T02:52:41.796-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.390-0500 c20011| 2016-04-06T02:52:41.796-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|8, t: 3 } and is durable through: { ts: Timestamp 1459929161000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.391-0500 c20011| 2016-04-06T02:52:41.796-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.397-0500 c20011| 2016-04-06T02:52:41.796-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.403-0500 c20011| 2016-04-06T02:52:41.797-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|7, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.412-0500 c20011| 2016-04-06T02:52:41.797-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-74.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|52 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.419-0500 c20011| 2016-04-06T02:52:41.797-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: 
"mongovm16-2016-04-06T02:52:41.797-0500-5704c04965c17830b843f1b2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161797), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -75.0 }, max: { _id: MaxKey } }, left: { min: { _id: -75.0 }, max: { _id: -74.0 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -74.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.420-0500 c20011| 2016-04-06T02:52:41.798-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.425-0500 c20011| 2016-04-06T02:52:41.798-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|8, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.432-0500 c20011| 2016-04-06T02:52:41.801-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.433-0500 c20011| 2016-04-06T02:52:41.801-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.435-0500 c20011| 2016-04-06T02:52:41.801-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.441-0500 c20011| 2016-04-06T02:52:41.801-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|9, t: 3 } and is durable through: { ts: Timestamp 1459929161000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.444-0500 c20011| 2016-04-06T02:52:41.801-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.447-0500 c20011| 2016-04-06T02:52:41.802-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, 
collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.449-0500 c20011| 2016-04-06T02:52:41.803-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|9, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|8, t: 3 }, name-id: "208" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.451-0500 c20011| 2016-04-06T02:52:41.822-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.451-0500 c20011| 2016-04-06T02:52:41.822-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.454-0500 c20011| 2016-04-06T02:52:41.822-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.459-0500 c20011| 2016-04-06T02:52:41.822-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|9, t: 3 } and is durable through: { ts: Timestamp 1459929161000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.461-0500 c20011| 2016-04-06T02:52:41.822-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.465-0500 c20011| 2016-04-06T02:52:41.822-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.472-0500 c20011| 2016-04-06T02:52:41.822-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|8, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 19ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.474-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.475-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.477-0500 c20013| 
2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.478-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.479-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.480-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.481-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.481-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.485-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.486-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.486-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.488-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.488-0500 c20013| 2016-04-06T02:52:08.984-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.489-0500 c20013| 2016-04-06T02:52:08.985-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.490-0500 c20013| 2016-04-06T02:52:08.985-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.492-0500 c20013| 2016-04-06T02:52:08.985-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:19.496-0500 c20013| 2016-04-06T02:52:08.985-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.498-0500 c20013| 2016-04-06T02:52:08.985-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 812 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|68, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.500-0500 c20013| 2016-04-06T02:52:08.985-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 812 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.500-0500 c20013| 2016-04-06T02:52:08.985-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 812 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.501-0500 c20013| 2016-04-06T02:52:08.985-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 814 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.985-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|68, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.502-0500 c20013| 2016-04-06T02:52:08.985-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 814 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.505-0500 c20013| 2016-04-06T02:52:08.987-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.508-0500 c20013| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 815 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.510-0500 c20013| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 815 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.510-0500 c20013| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 815 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.511-0500 c20013| 2016-04-06T02:52:08.987-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 814 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.511-0500 c20013| 2016-04-06T02:52:08.988-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|69, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.512-0500 c20013| 2016-04-06T02:52:08.988-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:19.514-0500 c20013| 2016-04-06T02:52:08.988-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 818 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.988-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.514-0500 c20013| 2016-04-06T02:52:08.988-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 818 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.518-0500 c20013| 2016-04-06T02:52:08.991-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 818 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929128000|70, t: 1, h: 3091193383868667392, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: -86.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-87.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-86.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.519-0500 c20013| 2016-04-06T02:52:08.991-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929128000|70 and ending at ts: Timestamp 1459929128000|70 [js_test:multi_coll_drop] 2016-04-06T02:53:19.519-0500 c20013| 2016-04-06T02:52:08.991-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:19.520-0500 c20013| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.521-0500 c20013| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.521-0500 c20013| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.521-0500 c20013| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.529-0500 c20013| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.529-0500 c20013| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.531-0500 c20013| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.534-0500 c20013| 2016-04-06T02:52:08.991-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.540-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.541-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.542-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.542-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.544-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.548-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.552-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.554-0500 c20013| 2016-04-06T02:52:08.992-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:19.556-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.558-0500 c20013| 2016-04-06T02:52:08.992-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-87.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.559-0500 c20013| 2016-04-06T02:52:08.992-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-86.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.560-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
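The one-op batch applied above is the split commit replicating: the "c"-type applyOps entry fetched by Request 818 unpacks into two "u" upserts against config.chunks, each carrying the target _id in o2 (hence the two "Using idhack" point lookups) and the replacement document in o, with b: true requesting upsert semantics. A minimal shell sketch of re-applying such an entry, using a hypothetical applyUpdateOp helper rather than the server's real apply path:

    // Sketch (hypothetical helper, not the server's apply path): re-apply a
    // "u" oplog entry like the chunk upserts above. o2 holds the _id used for
    // the idhack lookup; b: true makes the write an upsert.
    function applyUpdateOp(shellDb, entry) {
        var dot = entry.ns.indexOf(".");
        var coll = shellDb.getSiblingDB(entry.ns.substring(0, dot))
                          .getCollection(entry.ns.substring(dot + 1));
        coll.update(entry.o2, entry.o, { upsert: entry.b === true });
    }

Because the upsert is keyed on _id, replaying the same entry a second time leaves config.chunks unchanged, which is what lets a secondary safely re-apply a batch after an unclean restart.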
2016-04-06T02:53:19.560-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.561-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.567-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.570-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.572-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.573-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.574-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.576-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.578-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.578-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.579-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.580-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.580-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.581-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.582-0500 c20013| 2016-04-06T02:52:08.992-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.586-0500 c20013| 2016-04-06T02:52:08.993-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:19.592-0500 c20013| 2016-04-06T02:52:08.993-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.607-0500 c20013| 2016-04-06T02:52:08.993-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 820 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|69, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.607-0500 c20013| 2016-04-06T02:52:08.993-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 820 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.609-0500 c20013| 2016-04-06T02:52:08.993-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 820 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.629-0500 c20013| 2016-04-06T02:52:08.994-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 822 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.994-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|69, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.630-0500 c20013| 2016-04-06T02:52:08.994-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 822 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.635-0500 c20013| 2016-04-06T02:52:08.996-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.639-0500 c20013| 2016-04-06T02:52:08.996-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 823 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.640-0500 c20013| 2016-04-06T02:52:08.996-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 823 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.641-0500 c20012| 2016-04-06T02:52:22.634-0500 D QUERY [conn11] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.644-0500 c20012| 2016-04-06T02:52:22.634-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c03365c17830b843f1a5') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.648-0500 c20012| 2016-04-06T02:52:22.634-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.650-0500 c20012| 2016-04-06T02:52:22.634-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.657-0500 c20012| 2016-04-06T02:52:22.634-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.664-0500 c20012| 2016-04-06T02:52:22.634-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.666-0500 c20011| 2016-04-06T02:52:41.823-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:41.797-0500-5704c04965c17830b843f1b2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161797), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -75.0 }, max: { _id: MaxKey } }, left: { min: { _id: -75.0 }, max: { _id: -74.0 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -74.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 25ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.672-0500 
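The replSetUpdatePosition reports above carry, for every member, the last optime applied and the last optime made durable (journaled). The primary folds these into _lastCommittedOpTime, advancing it to the highest optime that a majority of members have made durable, and w: "majority" writes such as the 25ms changelog insert above block until the commit point reaches their own optime. A minimal sketch of that majority calculation, assuming optimes order by timestamp and then term and that the primary counts its own durable position alongside the reported ones (both assumptions; this is not the server's implementation):

    // Sketch: the majority commit point is the highest optime durable on a
    // majority of members. Optime here is { secs, inc, t }; ordering by
    // timestamp then term is an assumption.
    function compareOpTime(a, b) {
        if (a.secs !== b.secs) return a.secs - b.secs;
        if (a.inc !== b.inc) return a.inc - b.inc;
        return a.t - b.t;
    }
    function majorityCommitPoint(durable) {
        var sorted = durable.slice().sort(compareOpTime).reverse();
        var needed = Math.floor(durable.length / 2) + 1;   // 2 of 3 here
        return sorted[needed - 1];
    }
    majorityCommitPoint([
        { secs: 1459929161, inc: 9, t: 3 },   // the primary's own durable write
        { secs: 1459929161, inc: 9, t: 3 },   // the secondary reporting ...|9 durable
        { secs: 1459929161, inc: 1, t: 2 }    // a lagging member
    ]);   // => { secs: 1459929161, inc: 9, t: 3 }, matching the earlier
          // "Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|9, t: 3 }"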
c20011| 2016-04-06T02:52:41.823-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04965c17830b843f1b1') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.674-0500 c20011| 2016-04-06T02:52:41.823-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.679-0500 c20011| 2016-04-06T02:52:41.823-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04965c17830b843f1b1') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.680-0500 c20011| 2016-04-06T02:52:41.824-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|9, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.686-0500 c20011| 2016-04-06T02:52:41.824-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|9, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:19.688-0500 c20013| 2016-04-06T02:52:08.997-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 823 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.691-0500 c20012| 2016-04-06T02:52:22.640-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.697-0500 c20011| 2016-04-06T02:52:41.827-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|10, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|9, t: 3 }, name-id: "209" } [js_test:multi_coll_drop] 2016-04-06T02:53:19.700-0500 c20013| 2016-04-06T02:52:08.997-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 822 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.704-0500 c20013| 2016-04-06T02:52:08.997-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929128000|70, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.704-0500 c20013| 2016-04-06T02:52:08.997-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:19.707-0500 c20011| 2016-04-06T02:52:41.827-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|9, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.714-0500 c20012| 2016-04-06T02:52:22.641-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.715-0500 c20012| 2016-04-06T02:52:22.642-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:19.723-0500 c20012| 2016-04-06T02:52:22.642-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.728-0500 c20013| 2016-04-06T02:52:08.997-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 826 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:13.997-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.731-0500 c20013| 2016-04-06T02:52:08.997-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 826 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.740-0500 c20013| 2016-04-06T02:52:09.019-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 826 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|1, t: 1, h: 1591298908171832149, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:09.014-0500-5704c02965c17830b843f199", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129014), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -87.0 }, max: { _id: MaxKey } }, left: { min: { _id: -87.0 }, max: { _id: -86.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -86.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.742-0500 c20013| 2016-04-06T02:52:09.020-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|1 and ending at ts: Timestamp 1459929129000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:19.751-0500 c20013| 2016-04-06T02:52:09.020-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:19.752-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.754-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.754-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.759-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.760-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.762-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.763-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.764-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.765-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.766-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.767-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.768-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.770-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.771-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.771-0500 c20013| 2016-04-06T02:52:09.020-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:19.773-0500 c20013| 2016-04-06T02:52:09.020-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.775-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.776-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.777-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.778-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
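The getMore round-trips above are the oplog fetcher's steady state: a tailable, awaitData cursor on the sync source's local.oplog.rs whose getMore blocks up to maxTimeMS: 2500 waiting for new entries, returning an empty nextBatch on timeout ("fetcher read 0 operations") and the new ops otherwise. The same cursor shape can be driven from the shell; this sketch omits the replication-internal term and lastKnownCommittedOpTime fields seen in the logged commands, and lastApplied is a placeholder:

    // Sketch: tail a sync source's oplog the way the fetcher above does.
    var lastApplied = new Timestamp(0, 0);   // placeholder: last applied optime's ts
    var local = db.getSiblingDB("local");
    var res = local.runCommand({
        find: "oplog.rs",
        filter: { ts: { $gte: lastApplied } },
        tailable: true,                      // cursor survives reaching the end
        awaitData: true                      // getMore blocks for new data
    });
    res.cursor.firstBatch.forEach(function(entry) { printjson(entry); });
    var cursorId = res.cursor.id;
    while (true) {
        var more = local.runCommand({ getMore: cursorId, collection: "oplog.rs", maxTimeMS: 2500 });
        more.cursor.nextBatch.forEach(function(entry) { printjson(entry); });
        if (more.cursor.id == 0) break;      // cursor died (e.g. fell off the oplog)
    }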
2016-04-06T02:53:19.778-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.780-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.781-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.783-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.784-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.788-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.789-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.790-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.791-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.791-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.792-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.792-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.793-0500 c20013| 2016-04-06T02:52:09.021-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:19.798-0500 c20013| 2016-04-06T02:52:09.022-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:19.807-0500 c20013| 2016-04-06T02:52:09.022-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 828 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.022-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929128000|70, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:19.808-0500 c20013| 2016-04-06T02:52:09.022-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.817-0500 c20013| 2016-04-06T02:52:09.022-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 829 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929128000|70, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.817-0500 c20013| 2016-04-06T02:52:09.022-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 829 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.824-0500 c20013| 2016-04-06T02:52:09.022-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 829 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:19.828-0500 c20013| 2016-04-06T02:52:09.023-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 828 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.849-0500 c20013| 2016-04-06T02:52:09.024-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.859-0500 c20013| 2016-04-06T02:52:09.024-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 831 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:19.863-0500 c20013| 2016-04-06T02:52:09.024-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 831 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:19.864-0500 c20013| 2016-04-06T02:52:09.024-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 831 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.130-0500 c20013| 2016-04-06T02:52:09.025-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 828 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.132-0500 c20013| 2016-04-06T02:52:09.025-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.133-0500 c20013| 2016-04-06T02:52:09.025-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:20.139-0500 c20013| 2016-04-06T02:52:09.025-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 834 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.025-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.142-0500 c20013| 2016-04-06T02:52:09.025-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 834 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.145-0500 c20013| 2016-04-06T02:52:09.029-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 834 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|2, t: 1, h: 1364947328691333013, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.146-0500 c20013| 2016-04-06T02:52:09.029-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|2 and ending at ts: Timestamp 1459929129000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:20.149-0500 c20013| 2016-04-06T02:52:09.029-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.152-0500 c20013| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.155-0500 c20013| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.157-0500 c20013| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.157-0500 c20013| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.158-0500 c20013| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.158-0500 c20013| 2016-04-06T02:52:09.029-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.159-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.161-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.161-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.162-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.163-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.164-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.165-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.165-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.166-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.167-0500 c20013| 2016-04-06T02:52:09.030-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.168-0500 c20013| 2016-04-06T02:52:09.030-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:20.169-0500 c20013| 2016-04-06T02:52:09.030-0500 D QUERY [repl writer worker 9] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.171-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.171-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
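The config.locks update replicated above ({ $set: { state: 0 } } on _id: "multidrop.coll") is the release half of the distributed lock that serializes these metadata changes; the matching acquisition, visible a few records further on, sets a fresh ts ObjectId, state: 2, and a why message naming the chunk being split. The lock document can be watched directly from a shell on the config servers (an illustrative query; the state values, 0 for free and 2 for held, are read off these log records):

    // Sketch: observe the distributed lock for "multidrop.coll" flipping
    // between held (state: 2, with ts/when/why) and free (state: 0).
    db.getSiblingDB("config").locks.find({ _id: "multidrop.coll" }).forEach(printjson);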
2016-04-06T02:53:20.176-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.178-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.179-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.180-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.182-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.183-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.194-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.197-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.197-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.198-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.200-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.202-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.203-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.204-0500 c20013| 2016-04-06T02:52:09.031-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.216-0500 c20013| 2016-04-06T02:52:09.031-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 836 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.031-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.216-0500 c20013| 2016-04-06T02:52:09.031-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 836 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.217-0500 c20013| 2016-04-06T02:52:09.031-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.223-0500 c20013| 2016-04-06T02:52:09.032-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.227-0500 c20013| 2016-04-06T02:52:09.032-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 837 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.228-0500 c20013| 2016-04-06T02:52:09.032-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 837 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.229-0500 c20013| 2016-04-06T02:52:09.032-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 837 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.230-0500 c20013| 2016-04-06T02:52:09.034-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 836 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.231-0500 c20013| 2016-04-06T02:52:09.034-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.232-0500 c20013| 2016-04-06T02:52:09.034-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:20.238-0500 c20013| 2016-04-06T02:52:09.034-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.242-0500 c20013| 2016-04-06T02:52:09.034-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.247-0500 c20013| 2016-04-06T02:52:09.034-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 840 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.034-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.248-0500 c20013| 2016-04-06T02:52:09.034-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 840 on host 
mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.251-0500 c20013| 2016-04-06T02:52:09.034-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.254-0500 c20013| 2016-04-06T02:52:09.034-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:20.259-0500 c20013| 2016-04-06T02:52:09.034-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.263-0500 c20013| 2016-04-06T02:52:09.035-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|28 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.265-0500 c20013| 2016-04-06T02:52:09.035-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.269-0500 c20013| 2016-04-06T02:52:09.035-0500 D COMMAND [conn10] Using 'committed' snapshot. 
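The find on config.chunks just logged is the config server serving a sharding metadata refresh: readConcern level "majority" with an afterOpTime makes conn10 wait ("Waiting for 'committed' snapshot...") until the committed snapshot covers the last metadata write before answering, and the follow-up query below uses lastmod: { $gte: ... } to fetch only chunks changed since the previous refresh. A minimal sketch replaying that read from the shell, assuming a connection to a config server member; the opTime is copied from the log, and afterOpTime is an internal readConcern field used by the sharding code here, not a documented application option:

    // Replay of the majority metadata read logged above (illustrative only).
    var lastMetadataOpTime = { ts: Timestamp(1459929129, 2), t: NumberLong(1) };
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: { level: "majority", afterOpTime: lastMetadataOpTime },
        maxTimeMS: 30000
    });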
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|28 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.270-0500 c20013| 2016-04-06T02:52:09.035-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:20.274-0500 c20013| 2016-04-06T02:52:09.035-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|28 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929129000|2, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.278-0500 c20013| 2016-04-06T02:52:09.035-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.283-0500 c20013| 2016-04-06T02:52:09.035-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 841 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.286-0500 c20013| 2016-04-06T02:52:09.035-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 841 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.286-0500 c20013| 2016-04-06T02:52:09.036-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 841 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.290-0500 c20013| 2016-04-06T02:52:09.038-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 840 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|3, t: 1, h: -6195657287990773069, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02965c17830b843f19a'), state: 2, when: new Date(1459929129036), why: "splitting chunk [{ _id: -86.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.292-0500 c20013| 2016-04-06T02:52:09.038-0500 D REPL 
[rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|3 and ending at ts: Timestamp 1459929129000|3 [js_test:multi_coll_drop] 2016-04-06T02:53:20.295-0500 c20013| 2016-04-06T02:52:09.038-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.296-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.297-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.298-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.300-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.301-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.303-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.304-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.304-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.304-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.305-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.306-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.307-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.307-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.308-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.309-0500 c20013| 2016-04-06T02:52:09.039-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:20.310-0500 c20013| 2016-04-06T02:52:09.039-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.311-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.314-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.315-0500 c20013| 
2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.317-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.318-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.320-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.321-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.322-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.323-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.327-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.329-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.329-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.329-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.330-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.333-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.334-0500 c20013| 2016-04-06T02:52:09.039-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.336-0500 c20013| 2016-04-06T02:52:09.040-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.337-0500 c20013| 2016-04-06T02:52:09.040-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.340-0500 c20013| 2016-04-06T02:52:09.040-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
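The score(...) lines above record the query planner's plan-ranking formula: every candidate plan earns a base score of 1, plus its productivity (documents advanced per unit of work), plus three 0.0001 tie-breaker bonuses for avoiding fetches, blocking sorts, and index intersection. A sketch reproducing the two values seen in this log:

    // Plan-ranking arithmetic from the score(...) lines above.
    var baseScore = 1;
    var tieBreakers = 0.0001 + 0.0001 + 0.0001; // noFetch + noSort + noIxisect
    print(baseScore + 1 / 1 + tieBreakers);     // 2.0003   (1 advanced / 1 works)
    print(baseScore + 2 / 3 + tieBreakers);     // ~1.66697 (2 advanced / 3 works)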
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.346-0500 c20013| 2016-04-06T02:52:09.041-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 844 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.041-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.349-0500 c20013| 2016-04-06T02:52:09.041-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.353-0500 c20013| 2016-04-06T02:52:09.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 845 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.355-0500 c20013| 2016-04-06T02:52:09.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 845 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.356-0500 c20013| 2016-04-06T02:52:09.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 845 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.358-0500 c20013| 2016-04-06T02:52:09.042-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 844 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.362-0500 c20013| 2016-04-06T02:52:09.049-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.366-0500 c20013| 2016-04-06T02:52:09.049-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 847 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.367-0500 c20013| 2016-04-06T02:52:09.049-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 847 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.368-0500 c20013| 2016-04-06T02:52:09.050-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 847 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.371-0500 c20013| 2016-04-06T02:52:09.050-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 844 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.372-0500 c20013| 2016-04-06T02:52:09.050-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.374-0500 c20013| 2016-04-06T02:52:09.050-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:20.380-0500 c20013| 2016-04-06T02:52:09.051-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 850 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.051-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.382-0500 c20013| 2016-04-06T02:52:09.051-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 850 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.391-0500 c20013| 2016-04-06T02:52:09.056-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 850 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|4, t: 1, h: 6878295864364967569, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: -85.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-86.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-85.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.392-0500 c20013| 2016-04-06T02:52:09.056-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|4 and ending at ts: Timestamp 1459929129000|4 [js_test:multi_coll_drop] 2016-04-06T02:53:20.394-0500 c20013| 2016-04-06T02:52:09.056-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
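Request 858's batch above carries the actual split commit: a single applyOps oplog entry that upserts two chunk documents, replacing [{ _id: -86 }, { _id: MaxKey }) with [{ _id: -86 }, { _id: -85 }) and [{ _id: -85 }, { _id: MaxKey }), their minor versions bumped to 1|31 and 1|32 under the same epoch. A sketch of checking the result against the config metadata, illustrative only, with the expected versions taken from the log:

    // After the applyOps above is applied, the old chunk shows up as two.
    db.getSiblingDB("config").chunks
        .find({ ns: "multidrop.coll", "min._id": { $gte: -86 } })
        .sort({ min: 1 })
        .forEach(function(chunk) {
            // expect [-86, -85) at version 1|31 and [-85, MaxKey) at 1|32
            print(tojson(chunk.min) + " -> " + tojson(chunk.max));
        });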
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.395-0500 c20013| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.397-0500 c20013| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.399-0500 c20013| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.400-0500 c20013| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.402-0500 c20013| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.403-0500 c20013| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.404-0500 c20013| 2016-04-06T02:52:09.056-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.406-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.406-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.408-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.409-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.412-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.415-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.416-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.417-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.417-0500 c20013| 2016-04-06T02:52:09.057-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:20.419-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.420-0500 c20013| 2016-04-06T02:52:09.057-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-86.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.421-0500 c20013| 2016-04-06T02:52:09.057-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-85.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.422-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:20.422-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.422-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.425-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.426-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.426-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.427-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.427-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.428-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.429-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.429-0500 c20013| 2016-04-06T02:52:09.057-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.430-0500 c20013| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.431-0500 c20013| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.434-0500 c20013| 2016-04-06T02:52:09.058-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 852 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.058-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.435-0500 c20013| 2016-04-06T02:52:09.058-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 852 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.436-0500 c20013| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.440-0500 c20013| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.441-0500 c20013| 2016-04-06T02:52:09.058-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.442-0500 c20013| 2016-04-06T02:52:09.059-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
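The steady getMore/nextBatch rhythm in these entries is the secondary tailing its sync source's oplog: an awaitData cursor over local.oplog.rs that blocks for up to maxTimeMS (2500 ms here) when nothing new arrives, which is why so many requests finish with an empty nextBatch. A sketch of the same kind of tailing read from the shell; lastSeen is hypothetical, where the real fetcher would use its last applied optime:

    // Tailing read behind the getMore traffic above (illustrative only).
    var lastSeen = Timestamp(1459929129, 4);
    var cur = db.getSiblingDB("local").oplog.rs
        .find({ ts: { $gt: lastSeen } })
        .addOption(DBQuery.Option.tailable)
        .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) {
        printjson(cur.next()); // docs shaped like the nextBatch entries above
    }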
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.447-0500 c20013| 2016-04-06T02:52:09.059-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.449-0500 c20013| 2016-04-06T02:52:09.059-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 853 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.456-0500 c20013| 2016-04-06T02:52:09.059-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 853 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.459-0500 c20013| 2016-04-06T02:52:09.060-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 853 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.468-0500 c20013| 2016-04-06T02:52:09.062-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.474-0500 c20013| 2016-04-06T02:52:09.062-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 855 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.475-0500 c20013| 2016-04-06T02:52:09.062-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 855 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.477-0500 c20013| 2016-04-06T02:52:09.062-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 855 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.478-0500 c20013| 2016-04-06T02:52:09.062-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 852 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.480-0500 c20013| 2016-04-06T02:52:09.062-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.481-0500 c20013| 2016-04-06T02:52:09.062-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:20.484-0500 c20013| 2016-04-06T02:52:09.062-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 858 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.062-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.485-0500 c20013| 2016-04-06T02:52:09.062-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 858 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.491-0500 c20013| 2016-04-06T02:52:09.063-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 858 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|5, t: 1, h: -2747954062576067140, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:09.062-0500-5704c02965c17830b843f19b", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129062), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -86.0 }, max: { _id: MaxKey } }, left: { min: { _id: -86.0 }, max: { _id: -85.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -85.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.493-0500 c20013| 2016-04-06T02:52:09.063-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|5 and ending at ts: Timestamp 1459929129000|5 [js_test:multi_coll_drop] 2016-04-06T02:53:20.494-0500 c20013| 2016-04-06T02:52:09.063-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
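The op fetched just above is an insert into config.changelog: every split is audited there with the before range, the resulting left and right chunks, and their new versions. A sketch of pulling recent split records for this collection, illustrative only:

    // Recent split audit records from the config servers' changelog.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .limit(3)
        .forEach(function(entry) {
            print(entry.time + " " + tojson(entry.details.before));
        });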
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.495-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.495-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.497-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.498-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.499-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.503-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.506-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.506-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.508-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.511-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.514-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.514-0500 c20013| 2016-04-06T02:52:09.064-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:20.516-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.522-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.525-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.532-0500 c20013| 2016-04-06T02:52:09.064-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.534-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.535-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.535-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.537-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:20.538-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.539-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.540-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.541-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.543-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.543-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.544-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.545-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.547-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.548-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.549-0500 c20013| 2016-04-06T02:52:09.065-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.550-0500 c20013| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.551-0500 c20013| 2016-04-06T02:52:09.066-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.557-0500 c20013| 2016-04-06T02:52:09.066-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 860 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.066-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.558-0500 c20013| 2016-04-06T02:52:09.066-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 860 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.560-0500 c20013| 2016-04-06T02:52:09.066-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.564-0500 c20013| 2016-04-06T02:52:09.066-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.569-0500 c20013| 2016-04-06T02:52:09.066-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 861 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.572-0500 c20013| 2016-04-06T02:52:09.066-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 861 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.576-0500 c20013| 2016-04-06T02:52:09.066-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 861 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.580-0500 c20013| 2016-04-06T02:52:09.073-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.587-0500 c20013| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 863 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.588-0500 c20013| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 860 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.588-0500 c20013| 2016-04-06T02:52:09.074-0500 D REPL [ReplicationExecutor] Updating 
_lastCommittedOpTime to { ts: Timestamp 1459929129000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.589-0500 c20013| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 863 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.590-0500 c20013| 2016-04-06T02:52:09.074-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:20.592-0500 c20013| 2016-04-06T02:52:09.074-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 865 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.074-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.595-0500 c20013| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 865 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.597-0500 c20013| 2016-04-06T02:52:09.074-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 863 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.603-0500 c20013| 2016-04-06T02:52:09.075-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 865 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|6, t: 1, h: 1904439408712808447, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.604-0500 c20013| 2016-04-06T02:52:09.075-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|6 and ending at ts: Timestamp 1459929129000|6 [js_test:multi_coll_drop] 2016-04-06T02:53:20.606-0500 c20013| 2016-04-06T02:52:09.075-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
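The { $set: { state: 0 } } update on config.locks replicated above releases the distributed lock on "multidrop.coll" that the earlier state: 2 update (with its "splitting chunk ..." why string) had taken; reading this log era's lock manager, state 0 appears to mean free and 2 held, with 1 a transient contending state. A sketch of inspecting the lock document these updates mutate; the state semantics are inferred from the log, not a public API:

    // Distributed lock document for the collection being split.
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    printjson({ state: lock.state, ts: lock.ts, why: lock.why });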
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.607-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.608-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.609-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.609-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.612-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.613-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.614-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.616-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.617-0500 c20013| 2016-04-06T02:52:09.075-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.618-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.623-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.623-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.623-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.624-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.625-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.625-0500 c20013| 2016-04-06T02:52:09.076-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:20.628-0500 c20013| 2016-04-06T02:52:09.076-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.630-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.632-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.632-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:20.633-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.635-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.636-0500 c20013| 2016-04-06T02:52:09.076-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.636-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.639-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.644-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.644-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.645-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.646-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.646-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.647-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.650-0500 c20013| 2016-04-06T02:52:09.077-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 868 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.077-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.650-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.651-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.652-0500 c20013| 2016-04-06T02:52:09.077-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.654-0500 c20013| 2016-04-06T02:52:09.077-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 868 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.656-0500 c20013| 2016-04-06T02:52:09.077-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.659-0500 c20013| 2016-04-06T02:52:09.077-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.662-0500 c20013| 2016-04-06T02:52:09.077-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 869 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.664-0500 c20013| 2016-04-06T02:52:09.077-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 869 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.666-0500 c20013| 2016-04-06T02:52:09.077-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 869 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.671-0500 c20013| 2016-04-06T02:52:09.079-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.679-0500 c20013| 2016-04-06T02:52:09.079-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 871 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.681-0500 c20013| 2016-04-06T02:52:09.079-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 871 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.683-0500 c20013| 2016-04-06T02:52:09.080-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 871 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.685-0500 c20013| 2016-04-06T02:52:09.085-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 868 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.686-0500 c20013| 2016-04-06T02:52:09.086-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.687-0500 c20013| 2016-04-06T02:52:09.086-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:20.688-0500 c20013| 2016-04-06T02:52:09.086-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 874 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.086-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.690-0500 c20013| 2016-04-06T02:52:09.086-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 874 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.694-0500 c20013| 2016-04-06T02:52:09.094-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 874 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|7, t: 1, h: 7424373951997247397, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02965c17830b843f19c'), state: 2, when: new Date(1459929129093), why: "splitting chunk [{ _id: -85.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.698-0500 c20013| 2016-04-06T02:52:09.094-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|7 and ending at ts: Timestamp 1459929129000|7 [js_test:multi_coll_drop] 2016-04-06T02:53:20.698-0500 c20013| 2016-04-06T02:52:09.094-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:53:20.699-0500 c20013| 2016-04-06T02:52:09.094-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
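Interleaved with the fetching, the SyncSourceFeedback reporter pushes replication progress upstream: each replSetUpdatePosition call (bodies logged in full above) tells the sync source, for every member, the newest applied and journal-durable optimes, which is what lets the primary advance the majority commit point. A sketch replaying one such report, with values copied from the log; this is an internal member-to-member command shown only to illustrate its shape, and a live server may well reject it from an ordinary client:

    // One progress report, values copied from the log (illustrative only).
    db.getSiblingDB("admin").runCommand({
        replSetUpdatePosition: 1,
        optimes: [
            { durableOpTime: { ts: Timestamp(1459929129, 6), t: NumberLong(1) },
              appliedOpTime: { ts: Timestamp(1459929129, 6), t: NumberLong(1) },
              memberId: 2, cfgver: 1 }
        ]
    });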
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.699-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.702-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.703-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.704-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.705-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.707-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.708-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.709-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.710-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.711-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.712-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.715-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.719-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.720-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.721-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.723-0500 c20013| 2016-04-06T02:52:09.095-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:20.723-0500 c20013| 2016-04-06T02:52:09.095-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.725-0500 c20013| 2016-04-06T02:52:09.095-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.727-0500 c20013| 2016-04-06T02:52:09.096-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 876 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.096-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|6, t: 1 } } [js_test:multi_coll_drop] 
2016-04-06T02:53:20.729-0500 c20013| 2016-04-06T02:52:09.096-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 876 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:20.731-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.733-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.734-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.734-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.735-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.742-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.743-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.744-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.747-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.748-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.749-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.751-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.751-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.753-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.753-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.754-0500 c20013| 2016-04-06T02:52:09.098-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:20.756-0500 c20013| 2016-04-06T02:52:09.099-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:20.760-0500 c20013| 2016-04-06T02:52:09.099-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.762-0500 c20012| 2016-04-06T02:52:22.642-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|4, t: 2 } and is durable through: { ts: Timestamp 1459929142000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.766-0500 c20012| 2016-04-06T02:52:22.642-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.768-0500 c20012| 2016-04-06T02:52:22.643-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|3, t: 2 }, name-id: "193" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.770-0500 c20012| 2016-04-06T02:52:22.649-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.773-0500 c20012| 2016-04-06T02:52:22.650-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.774-0500 c20012| 2016-04-06T02:52:22.650-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:20.775-0500 c20012| 2016-04-06T02:52:22.650-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|4, t: 2 } and is durable through: { ts: Timestamp 1459929142000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.777-0500 c20012| 2016-04-06T02:52:22.650-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929142000|4, t: 2 } is not yet part of the current 
'committed' snapshot: { optime: { ts: Timestamp 1459929142000|3, t: 2 }, name-id: "193" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.780-0500 c20012| 2016-04-06T02:52:22.650-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.786-0500 c20012| 2016-04-06T02:52:22.650-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.789-0500 c20012| 2016-04-06T02:52:22.653-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.791-0500 c20012| 2016-04-06T02:52:22.653-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:20.799-0500 c20012| 2016-04-06T02:52:22.653-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.801-0500 c20012| 2016-04-06T02:52:22.653-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|4, t: 2 } and is durable through: { ts: Timestamp 1459929142000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.804-0500 c20012| 2016-04-06T02:52:22.653-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.808-0500 c20012| 2016-04-06T02:52:22.653-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.815-0500 c20012| 2016-04-06T02:52:22.653-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: 
Timestamp 1459929142000|3, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.820-0500 c20012| 2016-04-06T02:52:22.653-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c03365c17830b843f1a5') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 19ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.824-0500 c20012| 2016-04-06T02:52:22.653-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.826-0500 c20012| 2016-04-06T02:52:22.654-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.828-0500 c20012| 2016-04-06T02:52:22.654-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.830-0500 c20012| 2016-04-06T02:52:22.656-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03665c17830b843f1a7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929142656), why: "splitting chunk [{ _id: -80.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.834-0500 c20012| 2016-04-06T02:52:22.656-0500 D QUERY [conn11] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.839-0500 c20012| 2016-04-06T02:52:22.656-0500 D QUERY [conn11] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.842-0500 c20012| 2016-04-06T02:52:22.656-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.848-0500 c20012| 2016-04-06T02:52:22.656-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.848-0500 c20012| 2016-04-06T02:52:22.656-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:20.851-0500 c20012| 2016-04-06T02:52:22.656-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|4, t: 2 } and is durable through: { ts: Timestamp 1459929142000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.852-0500 c20012| 2016-04-06T02:52:22.656-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.854-0500 c20012| 2016-04-06T02:52:22.656-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.857-0500 c20012| 2016-04-06T02:52:22.657-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.860-0500 c20012| 2016-04-06T02:52:22.657-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.864-0500 c20012| 2016-04-06T02:52:22.659-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|4, t: 2 }, name-id: "194" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.867-0500 c20012| 2016-04-06T02:52:22.659-0500 D COMMAND [conn15] run command local.$cmd { 
getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.874-0500 c20012| 2016-04-06T02:52:22.660-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.875-0500 c20012| 2016-04-06T02:52:22.660-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:20.881-0500 c20012| 2016-04-06T02:52:22.660-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.885-0500 c20012| 2016-04-06T02:52:22.660-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|5, t: 2 } and is durable through: { ts: Timestamp 1459929142000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.889-0500 c20012| 2016-04-06T02:52:22.660-0500 D REPL [conn18] Required snapshot optime: { ts: Timestamp 1459929142000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|4, t: 2 }, name-id: "194" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.892-0500 c20012| 2016-04-06T02:52:22.660-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.895-0500 c20012| 2016-04-06T02:52:22.660-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.899-0500 c20012| 2016-04-06T02:52:22.662-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.901-0500 c20012| 2016-04-06T02:52:22.662-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:20.911-0500 c20012| 
2016-04-06T02:52:22.662-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|5, t: 2 } and is durable through: { ts: Timestamp 1459929142000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.915-0500 c20012| 2016-04-06T02:52:22.662-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929142000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|4, t: 2 }, name-id: "194" } [js_test:multi_coll_drop] 2016-04-06T02:53:20.918-0500 c20012| 2016-04-06T02:52:22.662-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.923-0500 c20012| 2016-04-06T02:52:22.662-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.928-0500 c20012| 2016-04-06T02:52:22.664-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:20.929-0500 c20012| 2016-04-06T02:52:22.664-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:20.930-0500 c20012| 2016-04-06T02:52:22.664-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.935-0500 c20012| 2016-04-06T02:52:22.664-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|5, t: 2 } and is durable through: { ts: Timestamp 1459929142000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.937-0500 c20012| 2016-04-06T02:52:22.664-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.945-0500 c20012| 2016-04-06T02:52:22.664-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.953-0500 c20012| 2016-04-06T02:52:22.664-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.959-0500 c20012| 2016-04-06T02:52:22.664-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03665c17830b843f1a7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929142656), why: "splitting chunk [{ _id: -80.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c03665c17830b843f1a7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929142656), why: "splitting chunk [{ _id: -80.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.964-0500 c20012| 2016-04-06T02:52:22.664-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.967-0500 c20012| 2016-04-06T02:52:22.664-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.969-0500 c20012| 2016-04-06T02:52:22.664-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:20.974-0500 c20012| 2016-04-06T02:52:22.665-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|42 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.977-0500 c20012| 2016-04-06T02:52:22.665-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } } 
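
The find on config.chunks above runs with readConcern { level: "majority", afterOpTime: ... }, so conn11 blocks ("Waiting for 'committed' snapshot") until the requested optime is part of the committed snapshot; the surrounding replSetUpdatePosition traffic is what advances _lastCommittedOpTime and lets the read proceed. A hedged sketch of issuing an equivalent majority read from the shell follows; the filter and sort are copied from the log, the chunk version the log prints as "Timestamp 1000|42" is written Timestamp(1, 42) in shell notation (the log renders the seconds field as milliseconds), and afterOpTime is omitted on the assumption that it is an internal field the sharding catalog client supplies:

    // Majority-committed read of the chunk metadata for multidrop.coll.
    var res = db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 42) } },
        sort: { lastmod: 1 },
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });
    // In the state logged above this returns the one chunk at version
    // Timestamp(1, 42), which the applyOps that follows splits into
    // chunks at versions Timestamp(1, 43) and Timestamp(1, 44).
    printjson(res.cursor.firstBatch);
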
[js_test:multi_coll_drop] 2016-04-06T02:53:20.983-0500 c20012| 2016-04-06T02:52:22.665-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|42 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.987-0500 c20012| 2016-04-06T02:52:22.665-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:20.991-0500 c20012| 2016-04-06T02:52:22.666-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|42 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|5, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:20.997-0500 c20012| 2016-04-06T02:52:22.666-0500 D COMMAND [conn11] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: -79.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-80.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-79.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|42 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:20.998-0500 c20012| 2016-04-06T02:52:22.666-0500 D QUERY [conn11] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:21.016-0500 c20012| 2016-04-06T02:52:22.666-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.016-0500 c20012| 2016-04-06T02:52:22.666-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.018-0500 c20012| 2016-04-06T02:52:22.666-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|5, t: 2 } and is durable through: { ts: Timestamp 1459929142000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.018-0500 c20012| 
2016-04-06T02:52:22.666-0500 D QUERY [conn11] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:21.020-0500 c20012| 2016-04-06T02:52:22.666-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.022-0500 c20012| 2016-04-06T02:52:22.666-0500 I COMMAND [conn11] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.030-0500 c20012| 2016-04-06T02:52:22.666-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.031-0500 c20012| 2016-04-06T02:52:22.666-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-80.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.031-0500 c20012| 2016-04-06T02:52:22.666-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-79.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.033-0500 c20012| 2016-04-06T02:52:22.667-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.037-0500 c20012| 2016-04-06T02:52:22.667-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.040-0500 c20012| 2016-04-06T02:52:22.669-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, 
memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.041-0500 c20012| 2016-04-06T02:52:22.669-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.047-0500 c20012| 2016-04-06T02:52:22.669-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.054-0500 c20012| 2016-04-06T02:52:22.669-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|6, t: 2 } and is durable through: { ts: Timestamp 1459929142000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.058-0500 c20012| 2016-04-06T02:52:22.669-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.075-0500 c20012| 2016-04-06T02:52:22.669-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.088-0500 c20012| 2016-04-06T02:52:22.669-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.091-0500 c20012| 2016-04-06T02:52:22.669-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|6, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|5, t: 2 }, name-id: "195" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.097-0500 c20012| 2016-04-06T02:52:22.670-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.099-0500 c20012| 2016-04-06T02:52:22.670-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.100-0500 c20012| 2016-04-06T02:52:22.670-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|6, t: 2 } and is durable through: { ts: Timestamp 1459929142000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.101-0500 c20012| 2016-04-06T02:52:22.670-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929142000|6, t: 2 } is not yet part of 
the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|5, t: 2 }, name-id: "195" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.104-0500 c20012| 2016-04-06T02:52:22.670-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.106-0500 c20012| 2016-04-06T02:52:22.670-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.108-0500 c20012| 2016-04-06T02:52:22.676-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.110-0500 c20012| 2016-04-06T02:52:22.676-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.115-0500 c20012| 2016-04-06T02:52:22.676-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.118-0500 c20012| 2016-04-06T02:52:22.676-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|6, t: 2 } and is durable through: { ts: Timestamp 1459929142000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.119-0500 c20012| 2016-04-06T02:52:22.676-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.122-0500 c20012| 2016-04-06T02:52:22.676-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.130-0500 c20012| 2016-04-06T02:52:22.676-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.133-0500 c20012| 2016-04-06T02:52:22.676-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.137-0500 c20012| 2016-04-06T02:52:22.676-0500 I COMMAND [conn11] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: -79.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-80.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-79.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|42 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.139-0500 c20012| 2016-04-06T02:52:22.676-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.139-0500 c20012| 2016-04-06T02:52:22.676-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.141-0500 c20012| 2016-04-06T02:52:22.676-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|6, t: 2 } and is durable through: { ts: Timestamp 1459929142000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.146-0500 c20012| 2016-04-06T02:52:22.676-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.150-0500 c20012| 2016-04-06T02:52:22.676-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.154-0500 c20012| 2016-04-06T02:52:22.676-0500 D COMMAND [conn11] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:22.676-0500-5704c03665c17830b843f1a8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142676), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -80.0 }, max: { _id: MaxKey } }, left: { min: { _id: -80.0 }, max: { _id: -79.0 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -79.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.156-0500 c20012| 2016-04-06T02:52:22.676-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.158-0500 c20012| 2016-04-06T02:52:22.677-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.164-0500 c20012| 2016-04-06T02:52:22.677-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.166-0500 c20012| 2016-04-06T02:52:22.677-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.167-0500 c20012| 2016-04-06T02:52:22.679-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.170-0500 c20012| 2016-04-06T02:52:22.681-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|6, 
t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.171-0500 c20012| 2016-04-06T02:52:22.681-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.172-0500 c20012| 2016-04-06T02:52:22.681-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.174-0500 c20012| 2016-04-06T02:52:22.681-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|7, t: 2 } and is durable through: { ts: Timestamp 1459929142000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.180-0500 c20012| 2016-04-06T02:52:22.681-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.182-0500 c20012| 2016-04-06T02:52:22.683-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|7, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|6, t: 2 }, name-id: "196" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.184-0500 c20012| 2016-04-06T02:52:22.686-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.187-0500 c20012| 2016-04-06T02:52:22.687-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.187-0500 c20012| 2016-04-06T02:52:22.687-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.189-0500 c20012| 2016-04-06T02:52:22.687-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|7, t: 2 } and is durable through: { ts: Timestamp 1459929142000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.190-0500 c20012| 2016-04-06T02:52:22.687-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929142000|7, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|6, t: 2 }, name-id: "196" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.193-0500 c20012| 2016-04-06T02:52:22.687-0500 D REPL [conn16] 
received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.196-0500 c20012| 2016-04-06T02:52:22.688-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.198-0500 c20012| 2016-04-06T02:52:22.690-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.198-0500 c20012| 2016-04-06T02:52:22.690-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.199-0500 c20012| 2016-04-06T02:52:22.690-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.200-0500 c20012| 2016-04-06T02:52:22.690-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|7, t: 2 } and is durable through: { ts: Timestamp 1459929142000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.202-0500 c20012| 2016-04-06T02:52:22.690-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.204-0500 c20012| 2016-04-06T02:52:22.690-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.207-0500 c20012| 2016-04-06T02:52:22.690-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.210-0500 c20012| 2016-04-06T02:52:22.690-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.217-0500 c20012| 2016-04-06T02:52:22.690-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|7, t: 2 } and is durable through: { ts: Timestamp 1459929142000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.221-0500 c20012| 2016-04-06T02:52:22.690-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.224-0500 c20012| 2016-04-06T02:52:22.690-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.229-0500 c20012| 2016-04-06T02:52:22.690-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.232-0500 c20012| 2016-04-06T02:52:22.690-0500 I COMMAND [conn11] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:22.676-0500-5704c03665c17830b843f1a8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142676), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -80.0 }, max: { _id: MaxKey } }, left: { min: { _id: -80.0 }, max: { _id: -79.0 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -79.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.234-0500 c20012| 2016-04-06T02:52:22.690-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { 
r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.236-0500 c20012| 2016-04-06T02:52:22.691-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c03665c17830b843f1a7') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.237-0500 c20012| 2016-04-06T02:52:22.691-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.240-0500 c20012| 2016-04-06T02:52:22.691-0500 D QUERY [conn11] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.243-0500 c20012| 2016-04-06T02:52:22.691-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c03665c17830b843f1a7') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.248-0500 c20012| 2016-04-06T02:52:22.691-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.250-0500 c20012| 2016-04-06T02:52:22.692-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.253-0500 c20012| 2016-04-06T02:52:22.692-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.258-0500 c20012| 2016-04-06T02:52:22.693-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.259-0500 c20012| 2016-04-06T02:52:22.693-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.261-0500 c20012| 2016-04-06T02:52:22.693-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.264-0500 c20012| 
2016-04-06T02:52:22.693-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|8, t: 2 } and is durable through: { ts: Timestamp 1459929142000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.277-0500 c20012| 2016-04-06T02:52:22.693-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.278-0500 c20012| 2016-04-06T02:52:22.694-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.282-0500 c20012| 2016-04-06T02:52:22.694-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.286-0500 c20012| 2016-04-06T02:52:22.695-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|8, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|7, t: 2 }, name-id: "197" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.291-0500 c20012| 2016-04-06T02:52:22.696-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.291-0500 c20012| 2016-04-06T02:52:22.696-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.294-0500 c20012| 2016-04-06T02:52:22.696-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|8, t: 2 } and is durable through: { ts: Timestamp 1459929142000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.296-0500 c20012| 2016-04-06T02:52:22.696-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929142000|8, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|7, t: 2 }, name-id: "197" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.299-0500 c20012| 2016-04-06T02:52:22.696-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.302-0500 c20012| 2016-04-06T02:52:22.696-0500 I COMMAND [conn16] command admin.$cmd 
command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.307-0500 c20012| 2016-04-06T02:52:22.698-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.307-0500 c20012| 2016-04-06T02:52:22.699-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.311-0500 c20012| 2016-04-06T02:52:22.699-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.317-0500 c20012| 2016-04-06T02:52:22.699-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|8, t: 2 } and is durable through: { ts: Timestamp 1459929142000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.318-0500 c20012| 2016-04-06T02:52:22.699-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.320-0500 c20012| 2016-04-06T02:52:22.699-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.323-0500 c20012| 2016-04-06T02:52:22.699-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.331-0500 c20012| 2016-04-06T02:52:22.699-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } 
cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.340-0500 c20012| 2016-04-06T02:52:22.699-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.341-0500 c20012| 2016-04-06T02:52:22.699-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.345-0500 c20012| 2016-04-06T02:52:22.699-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c03665c17830b843f1a7') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.357-0500 c20012| 2016-04-06T02:52:22.699-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|8, t: 2 } and is durable through: { ts: Timestamp 1459929142000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.359-0500 c20012| 2016-04-06T02:52:22.699-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.366-0500 c20012| 2016-04-06T02:52:22.699-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.369-0500 c20012| 2016-04-06T02:52:22.699-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.373-0500 c20012| 2016-04-06T02:52:22.700-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.379-0500 c20012| 
2016-04-06T02:52:22.703-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03665c17830b843f1a9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929142702), why: "splitting chunk [{ _id: -79.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.381-0500 c20012| 2016-04-06T02:52:22.703-0500 D QUERY [conn11] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.383-0500 c20012| 2016-04-06T02:52:22.703-0500 D QUERY [conn11] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.387-0500 c20012| 2016-04-06T02:52:22.703-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.390-0500 c20012| 2016-04-06T02:52:22.703-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.392-0500 c20012| 2016-04-06T02:52:22.703-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.395-0500 c20012| 2016-04-06T02:52:22.705-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.397-0500 c20012| 2016-04-06T02:52:22.705-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.403-0500 c20012| 2016-04-06T02:52:22.705-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: 
Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.404-0500 c20012| 2016-04-06T02:52:22.705-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.406-0500 c20012| 2016-04-06T02:52:22.705-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.410-0500 c20012| 2016-04-06T02:52:22.705-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|9, t: 2 } and is durable through: { ts: Timestamp 1459929142000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.427-0500 c20012| 2016-04-06T02:52:22.706-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.430-0500 c20012| 2016-04-06T02:52:22.706-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|9, t: 2 } and is durable through: { ts: Timestamp 1459929142000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.432-0500 c20012| 2016-04-06T02:52:22.706-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.439-0500 c20012| 2016-04-06T02:52:22.706-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.441-0500 c20012| 2016-04-06T02:52:22.706-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.443-0500 c20012| 2016-04-06T02:52:22.706-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.446-0500 c20012| 2016-04-06T02:52:22.706-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|9, t: 2 } is not yet part of the current 
'committed' snapshot: { optime: { ts: Timestamp 1459929142000|8, t: 2 }, name-id: "198" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.448-0500 c20012| 2016-04-06T02:52:22.708-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.448-0500 c20012| 2016-04-06T02:52:22.709-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.450-0500 c20012| 2016-04-06T02:52:22.708-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.451-0500 c20012| 2016-04-06T02:52:22.709-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.452-0500 c20012| 2016-04-06T02:52:22.709-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:21.453-0500 c20012| 2016-04-06T02:52:22.709-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|9, t: 2 } and is durable through: { ts: Timestamp 1459929142000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.455-0500 c20012| 2016-04-06T02:52:22.709-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.457-0500 c20012| 2016-04-06T02:52:22.709-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.459-0500 c20012| 2016-04-06T02:52:22.709-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms 
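
[editor's note] The stall visible above is the majority-commit wait: conn11's findAndModify against config.locks was issued with writeConcern { w: "majority", wtimeout: 15000 }, so the primary holds the reply until the write's optime ({ ts: Timestamp 1459929142000|9, t: 2 }) enters the committed snapshot, which happens only once the secondaries acknowledge it through replSetUpdatePosition and _lastCommittedOpTime advances. A minimal sketch of the same write pattern from the shell (hypothetical lock _id; illustrative, not part of the test):

    // A w:"majority" write blocks until a majority of the replica set has
    // durably applied it; this is the same wait conn11 is doing above.
    db.getSiblingDB("config").locks.update(
        { _id: "example.coll" },                      // hypothetical lock _id
        { $set: { state: 0 } },
        { writeConcern: { w: "majority", wtimeout: 15000 } }
    );
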
[js_test:multi_coll_drop] 2016-04-06T02:53:21.462-0500 c20012| 2016-04-06T02:52:22.709-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.463-0500 c20012| 2016-04-06T02:52:22.709-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|9, t: 2 } and is durable through: { ts: Timestamp 1459929142000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.466-0500 c20012| 2016-04-06T02:52:22.709-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.471-0500 c20012| 2016-04-06T02:52:22.709-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.476-0500 c20012| 2016-04-06T02:52:22.709-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03665c17830b843f1a9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929142702), why: "splitting chunk [{ _id: -79.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c03665c17830b843f1a9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929142702), why: "splitting chunk [{ _id: -79.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.480-0500 c20012| 2016-04-06T02:52:22.709-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.481-0500 c20012| 2016-04-06T02:52:22.709-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } 
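
[editor's note] The completed findAndModify just above is the config-server distributed lock being taken for the next split: it atomically moves the "multidrop.coll" lock document from state 0 (free) to state 2 (held), stamping ts/who/process/when/why, and upsert: true plus new: true creates the document if missing and returns the post-image so the caller can confirm it won the lock. A hedged shell rendering of that document shape (field values copied from the log; the real caller is mongod's dist-lock code, not the shell):

    // Illustrative shell equivalent of the lock acquisition seen above;
    // the query { state: 0 } guarantees only a free lock can be grabbed.
    db.getSiblingDB("config").locks.findAndModify({
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: {
            ts: ObjectId(),                        // fresh lock session id
            state: 2,                              // 2 = held exclusively
            who: "mongovm16:20010:1459929128:185613966:conn5",
            process: "mongovm16:20010:1459929128:185613966",
            when: new Date(),
            why: "splitting chunk [{ _id: -79.0 }, { _id: MaxKey }) in multidrop.coll"
        } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
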
[js_test:multi_coll_drop] 2016-04-06T02:53:21.483-0500 c20012| 2016-04-06T02:52:22.711-0500 D COMMAND [conn11] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: -78.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-79.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-78.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|44 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.485-0500 c20012| 2016-04-06T02:52:22.711-0500 D QUERY [conn11] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:21.486-0500 c20012| 2016-04-06T02:52:22.711-0500 D QUERY [conn11] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:21.490-0500 c20012| 2016-04-06T02:52:22.711-0500 I COMMAND [conn11] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.490-0500 c20012| 2016-04-06T02:52:22.711-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-79.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.491-0500 c20012| 2016-04-06T02:52:22.711-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-78.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.496-0500 c20012| 2016-04-06T02:52:22.711-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:21.497-0500 2016-04-06T02:53:06.741-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 15 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:53:21.497-0500 d20010| 2016-04-06T02:53:04.668-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:21.499-0500 c20011| 2016-04-06T02:52:41.831-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 
}, { durableOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.500-0500 d20010| 2016-04-06T02:53:04.720-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|74||5704c02806c33406d4d9c0c0, took 21353ms) [js_test:multi_coll_drop] 2016-04-06T02:53:21.501-0500 d20010| 2016-04-06T02:53:04.720-0500 I SHARDING [conn5] splitChunk accepted at version 1|74||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:21.501-0500 d20010| 2016-04-06T02:53:04.720-0500 I NETWORK [conn5] Socket closed remotely, no longer connected (idle 8 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:53:21.502-0500 d20010| 2016-04-06T02:53:04.722-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:21.502-0500 d20010| 2016-04-06T02:53:05.223-0500 W NETWORK [conn5] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:21.505-0500 c20013| 2016-04-06T02:52:09.099-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 877 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.506-0500 c20013| 2016-04-06T02:52:09.099-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 877 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:21.506-0500 c20013| 2016-04-06T02:52:09.099-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 877 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.507-0500 c20013| 2016-04-06T02:52:09.100-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 876 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.508-0500 c20013| 2016-04-06T02:52:09.100-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|7, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.511-0500 c20013| 2016-04-06T02:52:09.100-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.514-0500 c20013| 2016-04-06T02:52:09.100-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 880 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.516-0500 c20013| 2016-04-06T02:52:09.100-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 880 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:21.516-0500 c20013| 2016-04-06T02:52:09.101-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 880 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.518-0500 c20013| 2016-04-06T02:52:09.101-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:21.522-0500 c20013| 2016-04-06T02:52:09.101-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 882 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.101-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.523-0500 c20013| 2016-04-06T02:52:09.101-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 882 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:21.529-0500 c20013| 2016-04-06T02:52:09.102-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 882 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|8, t: 1, h: -8286090448525995533, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: -84.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-85.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-84.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.535-0500 c20013| 2016-04-06T02:52:09.103-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|8 and ending at ts: Timestamp 1459929129000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:21.539-0500 c20013| 2016-04-06T02:52:09.103-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:21.539-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.540-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.540-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.541-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.541-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.542-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.543-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.544-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.546-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.546-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.547-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.548-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.549-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.550-0500 c20013| 2016-04-06T02:52:09.103-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.551-0500 c20013| 2016-04-06T02:52:09.103-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:21.552-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.557-0500 c20013| 2016-04-06T02:52:09.104-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-85.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.557-0500 c20013| 2016-04-06T02:52:09.104-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-84.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:21.557-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.559-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:21.561-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.566-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.567-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.567-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.571-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.572-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.573-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.575-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.576-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.576-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.580-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.581-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.585-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.587-0500 c20013| 2016-04-06T02:52:09.104-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.587-0500 c20013| 2016-04-06T02:52:09.105-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.589-0500 c20013| 2016-04-06T02:52:09.105-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:21.592-0500 c20013| 2016-04-06T02:52:09.105-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 884 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.105-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.603-0500 c20013| 2016-04-06T02:52:09.105-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.608-0500 c20013| 2016-04-06T02:52:09.105-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 884 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:21.614-0500 c20013| 2016-04-06T02:52:09.105-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 885 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:21.617-0500 c20013| 2016-04-06T02:52:09.105-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 885 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:21.618-0500 c20013| 2016-04-06T02:52:09.105-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 885 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.619-0500 c20013| 2016-04-06T02:52:09.107-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 884 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.620-0500 c20013| 2016-04-06T02:52:09.107-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.622-0500 c20013| 2016-04-06T02:52:09.107-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:21.624-0500 c20013| 2016-04-06T02:52:09.107-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 888 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.107-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.627-0500 c20013| 2016-04-06T02:52:09.107-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 888 on host mongovm16:20011 
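
[editor's note] The c20013 lines in this stretch are one complete turn of a secondary's replication loop: an awaitData getMore on the sync source's oplog cursor (maxTimeMS: 2500, carrying lastKnownCommittedOpTime so the source can push the commit point forward), a one-op batch fanned out to the "repl writer worker" pool, and a Reporter sending replSetUpdatePosition back upstream to mongovm16:20011. A rough read-only approximation of the fetch half from a legacy shell (assumes a direct connection to the sync source; cursor flags from DBQuery.Option):

    // Tail the sync source's oplog from a known timestamp, roughly the way
    // the fetcher does; tailable + awaitData keeps the cursor open and
    // blocks briefly for new entries instead of polling.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var cur = oplog.find({ ts: { $gte: Timestamp(1459929129, 8) } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) printjson(cur.next());
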
[js_test:multi_coll_drop] 2016-04-06T02:53:21.631-0500 c20013| 2016-04-06T02:52:09.107-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 888 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|9, t: 1, h: 6671296048852295689, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:09.107-0500-5704c02965c17830b843f19d", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929129107), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -85.0 }, max: { _id: MaxKey } }, left: { min: { _id: -85.0 }, max: { _id: -84.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -84.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:21.634-0500 c20013| 2016-04-06T02:52:09.108-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|9 and ending at ts: Timestamp 1459929129000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:21.636-0500 c20013| 2016-04-06T02:52:09.110-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 890 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.110-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:21.640-0500 c20013| 2016-04-06T02:52:09.110-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:21.642-0500 c20013| 2016-04-06T02:52:09.110-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 890 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:21.648-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.651-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.652-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.655-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.656-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.656-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.659-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.661-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.661-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:21.663-0500 
c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.666-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.669-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.670-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.670-0500 c20013| 2016-04-06T02:52:09.110-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:21.675-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.678-0500 c20013| 2016-04-06T02:52:09.110-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.680-0500 c20013| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.683-0500 c20013| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.686-0500 c20013| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.690-0500 c20013| 2016-04-06T02:52:09.111-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.693-0500 c20013| 2016-04-06T02:52:09.111-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 891 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.694-0500 c20013| 2016-04-06T02:52:09.111-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 891 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:21.697-0500 c20013| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.699-0500 c20013| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.700-0500 c20013| 2016-04-06T02:52:09.111-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 891 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.701-0500 c20013| 2016-04-06T02:52:09.111-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.702-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.703-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.705-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.707-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.707-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.711-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.712-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.712-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.717-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.718-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.719-0500 c20013| 2016-04-06T02:52:09.112-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.722-0500 c20013| 2016-04-06T02:52:09.112-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:21.728-0500 c20013| 2016-04-06T02:52:09.112-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.732-0500 c20013| 2016-04-06T02:52:09.112-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 893 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.736-0500 c20013| 2016-04-06T02:52:09.112-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 893 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:21.737-0500 c20013| 2016-04-06T02:52:09.112-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 893 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.743-0500 c20013| 2016-04-06T02:52:09.113-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.747-0500 c20013| 2016-04-06T02:52:09.113-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 895 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.747-0500 c20013| 2016-04-06T02:52:09.113-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 895 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:21.753-0500 c20013| 2016-04-06T02:52:09.114-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 895 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.754-0500 c20013| 2016-04-06T02:52:09.114-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 890 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.755-0500 c20013| 2016-04-06T02:52:09.114-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|9, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.758-0500 c20013| 2016-04-06T02:52:09.114-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:21.762-0500 c20013| 2016-04-06T02:52:09.114-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 898 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.114-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.765-0500 c20013| 2016-04-06T02:52:09.114-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 898 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:21.770-0500 c20013| 2016-04-06T02:52:09.114-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 898 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|10, t: 1, h: -8221257626238961736, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.774-0500 c20013| 2016-04-06T02:52:09.114-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|10 and ending at ts: Timestamp 1459929129000|10
[js_test:multi_coll_drop] 2016-04-06T02:53:21.779-0500 c20013| 2016-04-06T02:52:09.115-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:21.779-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.781-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.788-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.789-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.793-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.796-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.797-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.797-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.798-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.802-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.803-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.804-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.806-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.807-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.820-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.823-0500 c20013| 2016-04-06T02:52:09.115-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:21.825-0500 c20013| 2016-04-06T02:52:09.115-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.827-0500 c20013| 2016-04-06T02:52:09.115-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.836-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.841-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.844-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.846-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.852-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.853-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.855-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.856-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.859-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.862-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.865-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.866-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.876-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.877-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.878-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.880-0500 c20013| 2016-04-06T02:52:09.116-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.882-0500 c20013| 2016-04-06T02:52:09.116-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:21.886-0500 c20013| 2016-04-06T02:52:09.116-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.897-0500 c20013| 2016-04-06T02:52:09.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 900 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.902-0500 c20013| 2016-04-06T02:52:09.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 900 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:21.903-0500 c20013| 2016-04-06T02:52:09.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 900 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.906-0500 c20013| 2016-04-06T02:52:09.117-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 902 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.117-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|9, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.909-0500 c20013| 2016-04-06T02:52:09.117-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 902 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:21.913-0500 c20013| 2016-04-06T02:52:09.125-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.916-0500 c20013| 2016-04-06T02:52:09.125-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 903 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.917-0500 c20013| 2016-04-06T02:52:09.126-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 903 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:21.919-0500 c20013| 2016-04-06T02:52:09.126-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 903 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.920-0500 c20013| 2016-04-06T02:52:09.126-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 902 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.921-0500 c20013| 2016-04-06T02:52:09.127-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|10, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.921-0500 c20013| 2016-04-06T02:52:09.127-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:21.924-0500 c20013| 2016-04-06T02:52:09.127-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 906 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.127-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.925-0500 c20013| 2016-04-06T02:52:09.127-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 906 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:21.933-0500 c20013| 2016-04-06T02:52:09.129-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 906 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|11, t: 1, h: -3977388700970809932, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02965c17830b843f19e'), state: 2, when: new Date(1459929129129), why: "splitting chunk [{ _id: -84.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.935-0500 c20013| 2016-04-06T02:52:09.130-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|11 and ending at ts: Timestamp 1459929129000|11
[js_test:multi_coll_drop] 2016-04-06T02:53:21.936-0500 c20013| 2016-04-06T02:52:09.130-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:21.938-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.939-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.959-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.959-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.962-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.963-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.964-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.967-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.968-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.969-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.972-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.973-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.975-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.977-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.978-0500 c20013| 2016-04-06T02:52:09.130-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.978-0500 c20013| 2016-04-06T02:52:09.131-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:21.979-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.980-0500 c20013| 2016-04-06T02:52:09.131-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:21.980-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.980-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.982-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.982-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.983-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.984-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.985-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.985-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.986-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.988-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:21.989-0500 c20013| 2016-04-06T02:52:09.131-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.007-0500 c20013| 2016-04-06T02:52:09.132-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.007-0500 c20013| 2016-04-06T02:52:09.132-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.007-0500 c20013| 2016-04-06T02:52:09.132-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.008-0500 c20013| 2016-04-06T02:52:09.132-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.008-0500 c20013| 2016-04-06T02:52:09.132-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.008-0500 c20013| 2016-04-06T02:52:09.132-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:22.008-0500 c20013| 2016-04-06T02:52:09.132-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.009-0500 c20013| 2016-04-06T02:52:09.132-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 908 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.009-0500 c20013| 2016-04-06T02:52:09.132-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 908 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.010-0500 c20013| 2016-04-06T02:52:09.132-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 908 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.010-0500 c20013| 2016-04-06T02:52:09.133-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 910 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.133-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|10, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.011-0500 c20013| 2016-04-06T02:52:09.133-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 910 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.011-0500 c20013| 2016-04-06T02:52:09.133-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 910 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.012-0500 c20013| 2016-04-06T02:52:09.139-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|11, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.027-0500 c20013| 2016-04-06T02:52:09.139-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:22.030-0500 c20013| 2016-04-06T02:52:09.139-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 912 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.139-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.030-0500 c20013| 2016-04-06T02:52:09.139-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 912 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.032-0500 c20013| 2016-04-06T02:52:09.139-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.032-0500 c20013| 2016-04-06T02:52:09.139-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:22.033-0500 c20013| 2016-04-06T02:52:09.140-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:22.039-0500 c20013| 2016-04-06T02:52:09.144-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 912 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929129000|12, t: 1, h: 8940339967816449048, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: -83.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-84.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-83.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.043-0500 c20013| 2016-04-06T02:52:09.145-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929129000|12 and ending at ts: Timestamp 1459929129000|12
[js_test:multi_coll_drop] 2016-04-06T02:53:22.047-0500 c20013| 2016-04-06T02:52:09.145-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:22.064-0500 c20013| 2016-04-06T02:52:09.145-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.067-0500 c20013| 2016-04-06T02:52:09.145-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 914 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.069-0500 c20013| 2016-04-06T02:52:09.145-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 914 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.069-0500 c20013| 2016-04-06T02:52:09.145-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 914 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.070-0500 c20013| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.070-0500 c20013| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.070-0500 c20013| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.071-0500 c20013| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.074-0500 c20013| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.079-0500 c20013| 2016-04-06T02:52:09.145-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.079-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.081-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.085-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.086-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.090-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.092-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.095-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.097-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.098-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.105-0500 c20013| 2016-04-06T02:52:09.146-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:22.105-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.106-0500 c20013| 2016-04-06T02:52:09.146-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-84.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.107-0500 c20013| 2016-04-06T02:52:09.146-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-83.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.109-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.110-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.110-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.111-0500 c20013| 2016-04-06T02:52:09.146-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.111-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.111-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.115-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.116-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.119-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.121-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.124-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.125-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.129-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.130-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.132-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.135-0500 c20013| 2016-04-06T02:52:09.147-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:22.136-0500 c20013| 2016-04-06T02:52:09.147-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:22.142-0500 c20013| 2016-04-06T02:52:09.152-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 916 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.152-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|11, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.146-0500 c20013| 2016-04-06T02:52:09.152-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 916 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.166-0500 c20013| 2016-04-06T02:52:09.153-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.174-0500 c20013| 2016-04-06T02:52:09.153-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 917 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|11, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.175-0500 c20013| 2016-04-06T02:52:09.153-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 917 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.176-0500 c20013| 2016-04-06T02:52:09.153-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 917 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.176-0500 c20013| 2016-04-06T02:52:09.156-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 916 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.178-0500 c20013| 2016-04-06T02:52:09.156-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929129000|12, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.178-0500 c20013| 2016-04-06T02:52:09.156-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:22.179-0500 c20013| 2016-04-06T02:52:09.156-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 920 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:14.156-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.179-0500 c20013| 2016-04-06T02:52:09.157-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 920 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.191-0500 c20013| 2016-04-06T02:52:09.158-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.197-0500 c20013| 2016-04-06T02:52:09.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 921 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, appliedOpTime: { ts: Timestamp 1459929127000|16, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.199-0500 c20013| 2016-04-06T02:52:09.158-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 921 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.201-0500 c20013| 2016-04-06T02:52:10.074-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 922 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:20.074-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.202-0500 c20013| 2016-04-06T02:52:10.074-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 922 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:22.204-0500 c20013| 2016-04-06T02:52:10.081-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:22.205-0500 c20013| 2016-04-06T02:52:10.081-0500 D COMMAND [conn5] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:22.207-0500 c20013| 2016-04-06T02:52:10.081-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:22.209-0500 c20013| 2016-04-06T02:52:10.082-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 923 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:20.082-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.209-0500 c20013| 2016-04-06T02:52:10.082-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 923 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:22.216-0500 c20013| 2016-04-06T02:52:10.082-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 923 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, opTime: { ts: Timestamp 1459929129000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.216-0500 c20013| 2016-04-06T02:52:10.082-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:12.082Z [js_test:multi_coll_drop] 2016-04-06T02:53:22.216-0500 c20013| 2016-04-06T02:52:10.164-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 921 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.219-0500 c20013| 2016-04-06T02:52:10.165-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 922 finished with response: { ok: 1.0, electionTime: new Date(6270347837762961409), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, opTime: { ts: Timestamp 1459929129000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.220-0500 c20013| 2016-04-06T02:52:10.165-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:12.165Z [js_test:multi_coll_drop] 2016-04-06T02:53:22.224-0500 c20013| 2016-04-06T02:52:10.165-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 920 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|1, t: 1, h: -7830848170959971096, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:10.165-0500-5704c02a65c17830b843f19f", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130165), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -84.0 }, max: { _id: MaxKey } }, left: { min: { _id: -84.0 }, max: { _id: -83.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -83.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.228-0500 c20013| 2016-04-06T02:52:10.183-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|1 and ending at ts: Timestamp 1459929130000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:22.230-0500 c20013| 2016-04-06T02:52:10.185-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:50129 #11 (7 connections now 
open) [js_test:multi_coll_drop] 2016-04-06T02:53:22.231-0500 c20013| 2016-04-06T02:52:10.185-0500 D COMMAND [conn11] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.234-0500 c20013| 2016-04-06T02:52:10.185-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:22.236-0500 c20013| 2016-04-06T02:52:10.185-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.237-0500 c20013| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.238-0500 c20013| 2016-04-06T02:52:10.185-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 928 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.185-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929129000|12, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.238-0500 c20013| 2016-04-06T02:52:10.185-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 928 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.238-0500 c20013| 2016-04-06T02:52:10.185-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.239-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.239-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.240-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.241-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.241-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.242-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.243-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.245-0500 c20013| 2016-04-06T02:52:10.186-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:22.249-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.250-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.256-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.259-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 6] shutting down thread 
in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.264-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.267-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.272-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.273-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.275-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.277-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.280-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.281-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.283-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.283-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.285-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.289-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.291-0500 c20013| 2016-04-06T02:52:10.186-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.293-0500 c20013| 2016-04-06T02:52:10.187-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.295-0500 c20013| 2016-04-06T02:52:10.187-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.298-0500 c20013| 2016-04-06T02:52:10.187-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.299-0500 c20013| 2016-04-06T02:52:10.187-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:22.300-0500 c20013| 2016-04-06T02:52:10.188-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.300-0500 c20013| 2016-04-06T02:52:10.188-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.301-0500 c20013| 2016-04-06T02:52:10.192-0500 D EXECUTOR [repl writer worker 
1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.302-0500 c20013| 2016-04-06T02:52:10.192-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.304-0500 c20013| 2016-04-06T02:52:10.192-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.309-0500 c20013| 2016-04-06T02:52:10.192-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.317-0500 c20013| 2016-04-06T02:52:10.192-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 929 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.322-0500 c20013| 2016-04-06T02:52:10.192-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 929 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.323-0500 c20013| 2016-04-06T02:52:10.192-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 929 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.324-0500 c20013| 2016-04-06T02:52:10.216-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 928 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.328-0500 c20013| 2016-04-06T02:52:10.216-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.328-0500 c20013| 2016-04-06T02:52:10.216-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:22.331-0500 c20013| 2016-04-06T02:52:10.216-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 932 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.216-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.333-0500 c20013| 2016-04-06T02:52:10.216-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 932 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.336-0500 c20013| 2016-04-06T02:52:10.217-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog 
progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.337-0500 c20013| 2016-04-06T02:52:10.217-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 933 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.338-0500 c20013| 2016-04-06T02:52:10.217-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 933 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.338-0500 c20013| 2016-04-06T02:52:10.217-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 933 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.340-0500 c20013| 2016-04-06T02:52:10.217-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 932 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|2, t: 1, h: 1200965899079533550, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.340-0500 c20013| 2016-04-06T02:52:10.217-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|2 and ending at ts: Timestamp 1459929130000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:22.341-0500 c20013| 2016-04-06T02:52:10.217-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.342-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.343-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.343-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.344-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.345-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.345-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.346-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.347-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.347-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.349-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.350-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.354-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.354-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.356-0500 c20013| 2016-04-06T02:52:10.218-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:22.356-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.358-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.359-0500 c20013| 2016-04-06T02:52:10.218-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:22.359-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.361-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.362-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
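
[annotation] The records above show the secondary applying a one-entry oplog batch: the fetched operation is an update on config.locks keyed by _id, so the applier resolves the target with the IDHACK fast path (a direct _id-index lookup, no plan ranking), and the writer-worker pool spins its sixteen threads up and back down around the single write. A minimal mongo-shell sketch of the equivalent client-side write, with the namespace and fields taken from the logged oplog entry; the server applies the raw entry directly, and updateOne here is only the client-side analogue:

    // Client-side analogue of the replicated oplog entry seen above:
    // { op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }
    // An exact-_id filter is what lets the server use the IDHACK plan,
    // hence the "Using idhack: { _id: ... }" line in the log.
    db.getSiblingDB("config").locks.updateOne(
        { _id: "multidrop.coll" },  // o2: the target document's _id
        { $set: { state: 0 } }      // o: mark the distributed lock released
    );
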
2016-04-06T02:53:22.363-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.363-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.363-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.364-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.366-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.367-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.367-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.367-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.369-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.370-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.373-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.375-0500 c20013| 2016-04-06T02:52:10.218-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.376-0500 c20013| 2016-04-06T02:52:10.219-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 936 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.219-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|1, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.377-0500 c20013| 2016-04-06T02:52:10.220-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 936 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.379-0500 c20013| 2016-04-06T02:52:10.221-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 936 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.380-0500 c20013| 2016-04-06T02:52:10.221-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|2, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.380-0500 c20013| 2016-04-06T02:52:10.221-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:22.384-0500 c20013| 2016-04-06T02:52:10.221-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 938 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.221-0500 cmd:{ getMore: 17466612721, collection: 
"oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.386-0500 c20013| 2016-04-06T02:52:10.221-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 938 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.389-0500 c20013| 2016-04-06T02:52:10.222-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.390-0500 c20013| 2016-04-06T02:52:10.222-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929130000|2, t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929130000|1, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.393-0500 c20013| 2016-04-06T02:52:10.222-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999982μs [js_test:multi_coll_drop] 2016-04-06T02:53:22.393-0500 c20013| 2016-04-06T02:52:10.224-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.394-0500 c20013| 2016-04-06T02:52:10.224-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.396-0500 c20013| 2016-04-06T02:52:10.225-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.397-0500 c20013| 2016-04-06T02:52:10.225-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.402-0500 c20013| 2016-04-06T02:52:10.225-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.403-0500 c20013| 2016-04-06T02:52:10.225-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.408-0500 c20013| 2016-04-06T02:52:10.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 939 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|1, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.411-0500 c20013| 2016-04-06T02:52:10.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 939 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.413-0500 c20013| 2016-04-06T02:52:10.225-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:22.417-0500 c20013| 2016-04-06T02:52:10.225-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:22.418-0500 c20013| 2016-04-06T02:52:10.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 939 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.423-0500 c20013| 2016-04-06T02:52:10.227-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.430-0500 c20013| 2016-04-06T02:52:10.227-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 941 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
2016-04-06T02:53:22.431-0500 c20013| 2016-04-06T02:52:10.227-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 941 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.446-0500 c20013| 2016-04-06T02:52:10.227-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 941 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.454-0500 c20013| 2016-04-06T02:52:10.228-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 938 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|3, t: 1, h: 4850188129135545978, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02a65c17830b843f1a0'), state: 2, when: new Date(1459929130228), why: "splitting chunk [{ _id: -83.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.457-0500 c20013| 2016-04-06T02:52:10.228-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|3 and ending at ts: Timestamp 1459929130000|3 [js_test:multi_coll_drop] 2016-04-06T02:53:22.460-0500 c20013| 2016-04-06T02:52:10.229-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.461-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.462-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.463-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.463-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.466-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.466-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.467-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.468-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.469-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.470-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.471-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.471-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.473-0500 
c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.473-0500 c20013| 2016-04-06T02:52:10.229-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:22.474-0500 c20013| 2016-04-06T02:52:10.229-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:22.478-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.479-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.480-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.482-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.482-0500 c20013| 2016-04-06T02:52:10.229-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.483-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.484-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.485-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.485-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.486-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.488-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.488-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.489-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.491-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.495-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.496-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.498-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.498-0500 c20013| 2016-04-06T02:52:10.230-0500 D 
EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.499-0500 c20013| 2016-04-06T02:52:10.230-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.500-0500 c20013| 2016-04-06T02:52:10.230-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.506-0500 c20013| 2016-04-06T02:52:10.230-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.508-0500 c20013| 2016-04-06T02:52:10.230-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 944 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|2, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.509-0500 c20013| 2016-04-06T02:52:10.230-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 944 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.510-0500 c20013| 2016-04-06T02:52:10.230-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 944 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.513-0500 c20013| 2016-04-06T02:52:10.231-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 946 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.231-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|2, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.516-0500 c20013| 2016-04-06T02:52:10.231-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 946 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.522-0500 c20013| 2016-04-06T02:52:10.232-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.532-0500 
c20013| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 947 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.532-0500 c20013| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 947 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.532-0500 c20013| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 947 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.537-0500 c20013| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 946 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.539-0500 c20013| 2016-04-06T02:52:10.232-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|3, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.541-0500 c20013| 2016-04-06T02:52:10.232-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:22.543-0500 c20013| 2016-04-06T02:52:10.232-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 950 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.232-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.544-0500 c20013| 2016-04-06T02:52:10.232-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 950 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.549-0500 c20013| 2016-04-06T02:52:10.234-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 950 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|4, t: 1, h: -5215253636266494371, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: -82.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-83.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-82.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.552-0500 c20013| 2016-04-06T02:52:10.234-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|4 and ending at ts: Timestamp 1459929130000|4 [js_test:multi_coll_drop] 2016-04-06T02:53:22.561-0500 c20013| 2016-04-06T02:52:10.234-0500 D QUERY [rsSync] Only 
one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.562-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.563-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.564-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.565-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.568-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.571-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.572-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.573-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.573-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.573-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.574-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.574-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.576-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.582-0500 c20013| 2016-04-06T02:52:10.234-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:22.589-0500 c20013| 2016-04-06T02:52:10.234-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.589-0500 c20013| 2016-04-06T02:52:10.234-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-83.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:22.590-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.591-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.592-0500 c20013| 2016-04-06T02:52:10.235-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-82.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:22.597-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.599-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.601-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.602-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.605-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.609-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.611-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.612-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.615-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.618-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.620-0500 c20013| 2016-04-06T02:52:10.235-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.622-0500 c20013| 2016-04-06T02:52:10.236-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.626-0500 c20013| 2016-04-06T02:52:10.236-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.627-0500 c20013| 2016-04-06T02:52:10.236-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.628-0500 c20013| 2016-04-06T02:52:10.236-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.632-0500 c20013| 2016-04-06T02:52:10.236-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 952 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.236-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|3, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.635-0500 c20013| 2016-04-06T02:52:10.236-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 952 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.637-0500 c20013| 2016-04-06T02:52:10.236-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.639-0500 c20013| 2016-04-06T02:52:10.236-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.645-0500 c20013| 2016-04-06T02:52:10.236-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.647-0500 c20013| 2016-04-06T02:52:10.236-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 953 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|3, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.648-0500 c20013| 2016-04-06T02:52:10.236-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 953 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.652-0500 c20013| 2016-04-06T02:52:10.236-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 953 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.658-0500 c20013| 2016-04-06T02:52:10.238-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.663-0500 c20013| 2016-04-06T02:52:10.238-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 955 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.664-0500 c20013| 2016-04-06T02:52:10.238-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 955 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.665-0500 c20013| 2016-04-06T02:52:10.238-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 955 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.666-0500 c20013| 2016-04-06T02:52:10.239-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 952 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.667-0500 c20013| 2016-04-06T02:52:10.239-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|4, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.668-0500 c20013| 2016-04-06T02:52:10.239-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:22.672-0500 c20013| 2016-04-06T02:52:10.239-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 958 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.239-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.673-0500 c20013| 2016-04-06T02:52:10.239-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 958 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.677-0500 c20013| 2016-04-06T02:52:10.239-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 958 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|5, t: 1, h: 3823828548878560264, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:10.239-0500-5704c02a65c17830b843f1a1", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130239), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -83.0 }, max: { _id: MaxKey } }, left: { min: { _id: -83.0 }, max: { _id: -82.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -82.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.678-0500 c20013| 2016-04-06T02:52:10.239-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|5 and ending at ts: Timestamp 1459929130000|5 [js_test:multi_coll_drop] 2016-04-06T02:53:22.680-0500 c20013| 2016-04-06T02:52:10.239-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.682-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.684-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.686-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.688-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.690-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.692-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.693-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.694-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.696-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.698-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.700-0500 c20013| 2016-04-06T02:52:10.239-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.703-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.706-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.710-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.713-0500 c20013| 2016-04-06T02:52:10.240-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:22.714-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.717-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.718-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.721-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.722-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
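
[annotation] Each getMore in this stretch is one turn of the background-sync tailing loop against the sync source's oplog: maxTimeMS: 2500 bounds the awaitData wait, term lets the upstream node reject a fetcher with a stale term, and lastKnownCommittedOpTime piggybacks the fetcher's view of the commit point. After each applied batch the SyncSourceFeedback reporter pushes a replSetUpdatePosition upstream, which is why every apply above is bracketed by "Reporter sending slave oplog progress" messages. A sketch of the logged getMore shape; the cursor id is copied from the log, and a live run would first need its own tailable, awaitData cursor on local.oplog.rs:

    // Shape of the background-sync fetcher's getMore (ids and optimes from the log).
    // maxTimeMS bounds the awaitData wait; term and lastKnownCommittedOpTime are the
    // replication-internal fields visible in the logged command.
    db.getSiblingDB("local").runCommand({
        getMore: NumberLong("17466612721"),  // cursor id from the log; a real run needs its own
        collection: "oplog.rs",
        maxTimeMS: 2500,
        term: NumberLong(1),
        lastKnownCommittedOpTime: { ts: Timestamp(1459929130, 5), t: NumberLong(1) }
    });
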
2016-04-06T02:53:22.723-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.726-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.726-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.727-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.729-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.730-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.731-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.731-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.732-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.735-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.736-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.738-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.739-0500 c20013| 2016-04-06T02:52:10.240-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.741-0500 c20013| 2016-04-06T02:52:10.241-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.743-0500 c20013| 2016-04-06T02:52:10.241-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.746-0500 c20013| 2016-04-06T02:52:10.241-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 960 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|4, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.761-0500 c20013| 2016-04-06T02:52:10.241-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 960 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.762-0500 c20013| 2016-04-06T02:52:10.241-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 961 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.241-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|4, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.763-0500 c20013| 2016-04-06T02:52:10.241-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 960 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.767-0500 c20013| 2016-04-06T02:52:10.241-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 961 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.772-0500 c20013| 2016-04-06T02:52:10.243-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.775-0500 c20013| 2016-04-06T02:52:10.243-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 963 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.778-0500 c20013| 2016-04-06T02:52:10.243-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 963 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.779-0500 c20013| 2016-04-06T02:52:10.243-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 963 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.781-0500 c20013| 2016-04-06T02:52:10.244-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 961 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.782-0500 c20013| 2016-04-06T02:52:10.244-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|5, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.783-0500 c20013| 2016-04-06T02:52:10.244-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:22.789-0500 c20013| 2016-04-06T02:52:10.244-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 966 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.244-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.791-0500 c20013| 2016-04-06T02:52:10.244-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 966 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.795-0500 c20013| 2016-04-06T02:52:10.244-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 966 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|6, t: 1, h: 838024042340526810, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.797-0500 c20013| 2016-04-06T02:52:10.244-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|6 and ending at ts: Timestamp 1459929130000|6 [js_test:multi_coll_drop] 2016-04-06T02:53:22.798-0500 c20013| 2016-04-06T02:52:10.244-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.800-0500 c20013| 2016-04-06T02:52:10.244-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.804-0500 c20013| 2016-04-06T02:52:10.244-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.805-0500 c20013| 2016-04-06T02:52:10.244-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.807-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.809-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.810-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.816-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.817-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.818-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.819-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.821-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.824-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.825-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.826-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.826-0500 c20013| 2016-04-06T02:52:10.245-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:22.830-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.831-0500 c20013| 2016-04-06T02:52:10.245-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:22.833-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.834-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.835-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
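
[annotation] Read together, the oplog entries replicated in this stretch trace one full split round under the distributed lock for multidrop.coll: an update setting the config.locks document to state 2 (acquired, with why: "splitting chunk [{ _id: -83.0 }, { _id: MaxKey }) in multidrop.coll"), an applyOps installing the two resulting chunk documents at lastmod 1000|37 and 1000|38, a config.changelog "split" insert, and finally an update back to state 0 (released), which is the entry being applied just above. To inspect the lock document while such a round is in flight, for example:

    // Look up the distributed-lock document the oplog entries above are mutating.
    // state 2 = acquired, state 0 = released; "why" records the split being performed.
    db.getSiblingDB("config").locks.find({ _id: "multidrop.coll" }).pretty();
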
2016-04-06T02:53:22.835-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.836-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.839-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.843-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.844-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.844-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.845-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.853-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.854-0500 c20013| 2016-04-06T02:52:10.245-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.857-0500 c20013| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.857-0500 c20013| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.859-0500 c20013| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.859-0500 c20013| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.860-0500 c20013| 2016-04-06T02:52:10.246-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:22.865-0500 c20013| 2016-04-06T02:52:10.246-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:22.868-0500 c20013| 2016-04-06T02:52:10.246-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.875-0500 c20013| 2016-04-06T02:52:10.246-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 968 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|5, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.875-0500 c20013| 2016-04-06T02:52:10.246-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 968 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.879-0500 c20013| 2016-04-06T02:52:10.246-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 969 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.246-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|5, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.880-0500 c20013| 2016-04-06T02:52:10.246-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 968 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.883-0500 c20013| 2016-04-06T02:52:10.246-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 969 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.885-0500 c20013| 2016-04-06T02:52:10.254-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 969 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.887-0500 c20013| 2016-04-06T02:52:10.254-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|6, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.890-0500 c20013| 2016-04-06T02:52:10.254-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:22.893-0500 c20013| 2016-04-06T02:52:10.254-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 972 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.254-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.897-0500 c20013| 2016-04-06T02:52:10.254-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater 
mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.898-0500 c20013| 2016-04-06T02:52:10.254-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 972 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.905-0500 c20013| 2016-04-06T02:52:10.254-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 973 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:22.906-0500 c20013| 2016-04-06T02:52:10.254-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 973 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.909-0500 c20013| 2016-04-06T02:52:10.254-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 973 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.913-0500 c20013| 2016-04-06T02:52:10.255-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|36 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.921-0500 c20013| 2016-04-06T02:52:10.255-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:22.924-0500 c20013| 2016-04-06T02:52:10.255-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|36 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.926-0500 c20013| 2016-04-06T02:52:10.255-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:22.931-0500 c20013| 2016-04-06T02:52:10.255-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|36 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:22.936-0500 s20015| 2016-04-06T02:53:04.663-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 83 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:21.773-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:22.936-0500 s20015| 2016-04-06T02:53:04.663-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 83 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:22.937-0500 s20015| 2016-04-06T02:53:04.664-0500 D NETWORK [Balancer] Marking host mongovm16:20011 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:22.938-0500 s20015| 2016-04-06T02:53:04.664-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:22.940-0500 s20015| 2016-04-06T02:53:04.664-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:22.941-0500 s20015| 2016-04-06T02:53:04.664-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20011, event detected [js_test:multi_coll_drop] 2016-04-06T02:53:22.943-0500 s20015| 2016-04-06T02:53:04.664-0500 I NETWORK [Balancer] Socket closed remotely, no longer connected (idle 13 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:53:22.944-0500 s20015| 2016-04-06T02:53:04.664-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:22.947-0500 s20015| 2016-04-06T02:53:04.664-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:22.950-0500 s20015| 2016-04-06T02:53:04.664-0500 D NETWORK [Balancer] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:22.951-0500 s20015| 2016-04-06T02:53:04.665-0500 D NETWORK [Balancer] connected connection! 
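[annotation] Here s20015's Balancer hits HostUnreachable: End of file on its periodic ping write, marks mongovm16:20011 as failed, and falls back to a full refresh of multidrop-configRS until a new primary becomes visible. The ping itself is an ordinary upsert into config.mongos with majority write concern; a minimal shell sketch of the same command shape (field values copied from the log above, not re-verified):

    // Periodic mongos heartbeat: upsert this router's ping document.
    db.getSiblingDB("config").runCommand({
        update: "mongos",
        updates: [{
            q: { _id: "mongovm16:20015" },
            u: { $set: { _id: "mongovm16:20015", ping: new Date(), up: 44,
                         waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } },
            multi: false, upsert: true
        }],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });

Because the failure is classified as retriable, the Balancer simply re-issues the command once the replica-set monitor finds the new primary, as the following records show.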
[js_test:multi_coll_drop] 2016-04-06T02:53:22.952-0500 s20015| 2016-04-06T02:53:04.665-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:53:22.956-0500 s20015| 2016-04-06T02:53:04.666-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:22.957-0500 s20015| 2016-04-06T02:53:04.666-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:22.959-0500 s20015| 2016-04-06T02:53:04.668-0500 D NETWORK [Balancer] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:22.960-0500 s20015| 2016-04-06T02:53:04.668-0500 D NETWORK [Balancer] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:53:22.961-0500 s20015| 2016-04-06T02:53:04.668-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:22.963-0500 s20015| 2016-04-06T02:53:05.169-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:22.965-0500 s20015| 2016-04-06T02:53:05.174-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:22.969-0500 s20015| 2016-04-06T02:53:05.674-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:22.973-0500 s20015| 2016-04-06T02:53:05.682-0500 D ASIO [Balancer] startCommand: RemoteCommand 85 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:35.682-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.975-0500 s20015| 2016-04-06T02:53:05.687-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 85 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:22.976-0500 s20015| 2016-04-06T02:53:06.715-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:22.979-0500 s20015| 2016-04-06T02:53:07.373-0500 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager [js_test:multi_coll_drop] 2016-04-06T02:53:22.982-0500 s20015| 2016-04-06T02:53:07.373-0500 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 86 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:37.373-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.982-0500 s20015| 2016-04-06T02:53:07.373-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:22.983-0500 s20015| 2016-04-06T02:53:07.374-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 87 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:22.985-0500 s20015| 2016-04-06T02:53:07.374-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:22.986-0500 s20015| 2016-04-06T02:53:07.375-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 87 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:22.988-0500 
s20015| 2016-04-06T02:53:07.375-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 86 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:22.990-0500 s20015| 2016-04-06T02:53:07.375-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 86 finished with response: { cacheGeneration: ObjectId('5704c01f525046a6a8063338'), ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:22.993-0500 s20015| 2016-04-06T02:53:07.375-0500 I ACCESS [UserCacheInvalidator] User cache generation changed from 5704c01c3876c4cfd2eb3eb7 to 5704c01f525046a6a8063338; invalidating user cache [js_test:multi_coll_drop] 2016-04-06T02:53:22.997-0500 s20015| 2016-04-06T02:53:08.212-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 85 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929185000|3, t: 4 }, electionId: ObjectId('7fffffff0000000000000004') } [js_test:multi_coll_drop] 2016-04-06T02:53:23.024-0500 s20015| 2016-04-06T02:53:08.212-0500 D ASIO [Balancer] startCommand: RemoteCommand 90 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.212-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.025-0500 s20015| 2016-04-06T02:53:08.212-0500 I ASIO [Balancer] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.027-0500 s20015| 2016-04-06T02:53:08.212-0500 I ASIO [Balancer] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:23.044-0500 s20015| 2016-04-06T02:53:08.212-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.051-0500 s20015| 2016-04-06T02:53:08.213-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 91 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.053-0500 s20015| 2016-04-06T02:53:08.216-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.055-0500 s20015| 2016-04-06T02:53:08.216-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 91 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:23.057-0500 s20015| 2016-04-06T02:53:08.216-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 90 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.062-0500 s20015| 2016-04-06T02:53:08.217-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 90 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.063-0500 s20015| 2016-04-06T02:53:08.218-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929185000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.064-0500 s20015| 2016-04-06T02:53:08.218-0500 D ASIO [Balancer] startCommand: RemoteCommand 93 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.218-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.066-0500 s20015| 2016-04-06T02:53:08.219-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 93 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.069-0500 s20015| 2016-04-06T02:53:08.219-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 93 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.069-0500 s20015| 2016-04-06T02:53:08.219-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:53:23.084-0500 s20015| 2016-04-06T02:53:08.219-0500 D ASIO [Balancer] startCommand: RemoteCommand 95 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:38.219-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.084-0500 s20015| 2016-04-06T02:53:08.219-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 95 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.089-0500 s20015| 2016-04-06T02:53:08.221-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 95 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.090-0500 s20015| 2016-04-06T02:53:08.221-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:53:23.092-0500 s20015| 2016-04-06T02:53:08.221-0500 D ASIO [Balancer] startCommand: RemoteCommand 97 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:38.221-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929188221), up: 61, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.097-0500 s20015| 2016-04-06T02:53:08.221-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 97 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.101-0500 s20015| 2016-04-06T02:53:08.271-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 97 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929188000|3, t: 4 }, electionId: ObjectId('7fffffff0000000000000004') } [js_test:multi_coll_drop] 2016-04-06T02:53:23.105-0500 s20014| 2016-04-06T02:53:04.664-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 387 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:21.765-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:23.109-0500 s20014| 2016-04-06T02:53:04.664-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 387 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:23.111-0500 s20014| 2016-04-06T02:53:04.664-0500 D NETWORK [Balancer] Marking host mongovm16:20011 as failed [js_test:multi_coll_drop] 
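[annotation] Once reconnected, the Balancer re-reads its configuration with readConcern level "majority" pinned to the last-seen config opTime (afterOpTime), so it never acts on stale settings: config.shards for the shard list, config.settings { _id: "chunksize" } for MaxChunkSize, and config.settings { _id: "balancer" }, which reports stopped: true here, so the balancing round is skipped. A minimal shell sketch of that settings read (afterOpTime omitted, since it is only meaningful with an opTime captured from a previous config write):

    // Check whether the balancer is disabled, reading only majority-committed data.
    db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "balancer" },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });  // firstBatch: [ { _id: "balancer", stopped: true } ] in this run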
2016-04-06T02:53:23.116-0500 c20011| 2016-04-06T02:52:41.832-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:23.119-0500 c20011| 2016-04-06T02:52:41.832-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.122-0500 c20011| 2016-04-06T02:52:41.832-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|10, t: 3 } and is durable through: { ts: Timestamp 1459929161000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.124-0500 c20011| 2016-04-06T02:52:41.832-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|10, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|9, t: 3 }, name-id: "209" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.128-0500 c20011| 2016-04-06T02:52:41.832-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.130-0500 c20011| 2016-04-06T02:52:41.838-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:23.131-0500 c20011| 2016-04-06T02:52:41.838-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:23.133-0500 c20011| 2016-04-06T02:52:41.838-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.135-0500 c20011| 2016-04-06T02:52:41.838-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|10, t: 3 } and is durable through: { ts: Timestamp 1459929161000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.137-0500 c20011| 2016-04-06T02:52:41.838-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.140-0500 c20011| 2016-04-06T02:52:41.838-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 
}, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.142-0500 c20011| 2016-04-06T02:52:41.839-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04965c17830b843f1b1') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.144-0500 c20011| 2016-04-06T02:52:41.839-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|9, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.147-0500 c20011| 2016-04-06T02:52:41.840-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.147-0500 c20011| 2016-04-06T02:52:41.840-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:23.150-0500 c20011| 2016-04-06T02:52:41.840-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.150-0500 c20011| 2016-04-06T02:52:41.840-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:23.155-0500 c20011| 2016-04-06T02:52:41.840-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.156-0500 c20011| 2016-04-06T02:52:41.841-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:23.160-0500 c20011| 2016-04-06T02:52:41.842-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04965c17830b843f1b3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161842), why: "splitting chunk [{ _id: -74.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.164-0500 c20011| 2016-04-06T02:52:41.842-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.166-0500 c20011| 2016-04-06T02:52:41.842-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.170-0500 c20011| 2016-04-06T02:52:41.842-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.186-0500 c20011| 2016-04-06T02:52:41.843-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.188-0500 c20011| 2016-04-06T02:52:41.845-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:23.189-0500 c20011| 2016-04-06T02:52:41.866-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|11, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|10, t: 3 }, name-id: "210" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.191-0500 c20011| 2016-04-06T02:52:41.875-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:23.192-0500 c20011| 2016-04-06T02:52:41.875-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:23.194-0500 c20011| 2016-04-06T02:52:41.875-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.195-0500 c20011| 2016-04-06T02:52:41.875-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|11, t: 3 } and is durable through: { ts: Timestamp 1459929161000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.198-0500 c20011| 2016-04-06T02:52:41.875-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|11, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|10, t: 3 }, name-id: "210" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.202-0500 c20011| 2016-04-06T02:52:41.875-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.204-0500 
c20011| 2016-04-06T02:52:41.878-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:23.204-0500 c20011| 2016-04-06T02:52:41.878-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:23.205-0500 c20011| 2016-04-06T02:52:41.878-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.207-0500 c20011| 2016-04-06T02:52:41.878-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|11, t: 3 } and is durable through: { ts: Timestamp 1459929161000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.208-0500 c20011| 2016-04-06T02:52:41.878-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.211-0500 c20011| 2016-04-06T02:52:41.878-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.213-0500 c20011| 2016-04-06T02:52:41.879-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 33ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.218-0500 c20011| 2016-04-06T02:52:41.879-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04965c17830b843f1b3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161842), why: "splitting chunk [{ _id: -74.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04965c17830b843f1b3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161842), why: "splitting chunk [{ _id: -74.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 
docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 36ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.221-0500 c20011| 2016-04-06T02:52:41.880-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: -73.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-74.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-73.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|54 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.222-0500 c20011| 2016-04-06T02:52:41.880-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|11, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:23.223-0500 c20011| 2016-04-06T02:52:41.880-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:23.224-0500 c20011| 2016-04-06T02:52:41.880-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:23.227-0500 c20011| 2016-04-06T02:52:41.881-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.229-0500 c20011| 2016-04-06T02:52:41.881-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-74.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.229-0500 c20011| 2016-04-06T02:52:41.881-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-73.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.233-0500 c20011| 2016-04-06T02:52:41.881-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|11, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 274 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.235-0500 c20011| 2016-04-06T02:52:41.883-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 
3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|11, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:23.237-0500 c20011| 2016-04-06T02:52:41.887-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:23.237-0500 c20011| 2016-04-06T02:52:41.887-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:23.238-0500 c20011| 2016-04-06T02:52:41.887-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.241-0500 c20011| 2016-04-06T02:52:41.887-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|12, t: 3 } and is durable through: { ts: Timestamp 1459929161000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.245-0500 c20011| 2016-04-06T02:52:41.887-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.249-0500 c20011| 2016-04-06T02:52:41.891-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|12, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|11, t: 3 }, name-id: "211" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.252-0500 c20011| 2016-04-06T02:52:41.893-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:23.253-0500 c20011| 2016-04-06T02:52:41.893-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:23.254-0500 c20011| 2016-04-06T02:52:41.893-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.256-0500 c20011| 2016-04-06T02:52:41.893-0500 D REPL 
[conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|12, t: 3 } and is durable through: { ts: Timestamp 1459929161000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.257-0500 c20011| 2016-04-06T02:52:41.893-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.263-0500 c20011| 2016-04-06T02:52:41.893-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.269-0500 c20011| 2016-04-06T02:52:41.893-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|11, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.273-0500 c20011| 2016-04-06T02:52:41.893-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: -73.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-74.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-73.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|54 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.280-0500 c20011| 2016-04-06T02:52:41.893-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:41.893-0500-5704c04965c17830b843f1b4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161893), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -74.0 }, max: { _id: MaxKey } }, left: { min: { _id: -74.0 }, max: { _id: -73.0 }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -73.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 
30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.282-0500 c20011| 2016-04-06T02:52:41.894-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|12, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:23.284-0500 c20011| 2016-04-06T02:52:41.895-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|12, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.286-0500 c20011| 2016-04-06T02:52:41.895-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|13, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|12, t: 3 }, name-id: "212" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.287-0500 c20011| 2016-04-06T02:52:41.897-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|12, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:23.289-0500 c20011| 2016-04-06T02:52:41.906-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:23.289-0500 c20011| 2016-04-06T02:52:41.906-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:23.292-0500 c20011| 2016-04-06T02:52:41.906-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.298-0500 c20011| 2016-04-06T02:52:41.906-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|13, t: 3 } and is durable through: { ts: Timestamp 1459929161000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.301-0500 c20011| 2016-04-06T02:52:41.906-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929161000|13, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|12, t: 3 }, name-id: "212" } [js_test:multi_coll_drop] 2016-04-06T02:53:23.304-0500 c20011| 2016-04-06T02:52:41.906-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, 
appliedOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.309-0500 c20011| 2016-04-06T02:52:41.906-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.310-0500 c20011| 2016-04-06T02:52:41.906-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:23.312-0500 c20011| 2016-04-06T02:52:41.906-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.314-0500 c20011| 2016-04-06T02:52:41.906-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|13, t: 3 } and is durable through: { ts: Timestamp 1459929161000|13, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.315-0500 c20011| 2016-04-06T02:52:41.906-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|13, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.317-0500 c20011| 2016-04-06T02:52:41.906-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.323-0500 c20011| 2016-04-06T02:52:41.907-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:41.893-0500-5704c04965c17830b843f1b4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161893), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -74.0 }, max: { _id: MaxKey } }, left: { min: { _id: -74.0 }, max: { _id: -73.0 }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -73.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.330-0500 c20011| 2016-04-06T02:52:41.907-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|12, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.348-0500 c20011| 2016-04-06T02:52:41.907-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04965c17830b843f1b3') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.356-0500 c20011| 2016-04-06T02:52:41.907-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.359-0500 c20011| 2016-04-06T02:52:41.907-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04965c17830b843f1b3') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.365-0500 c20011| 2016-04-06T02:52:41.908-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|13, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.373-0500 c20011| 2016-04-06T02:52:41.908-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|13, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.375-0500 c20011| 2016-04-06T02:52:41.911-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|13, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.379-0500 c20011| 2016-04-06T02:52:41.912-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.380-0500 c20011| 2016-04-06T02:52:41.912-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:23.382-0500 c20011| 2016-04-06T02:52:41.912-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.385-0500 c20011| 2016-04-06T02:52:41.912-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|14, t: 3 } and is durable through: { ts: Timestamp 1459929161000|13, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.386-0500 c20011| 2016-04-06T02:52:41.912-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|14, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|13, t: 3 }, name-id: "213" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.390-0500 c20011| 2016-04-06T02:52:41.912-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.392-0500 c20011| 2016-04-06T02:52:41.947-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.393-0500 c20011| 2016-04-06T02:52:41.947-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:23.396-0500 c20011| 2016-04-06T02:52:41.947-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.399-0500 c20011| 2016-04-06T02:52:41.947-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|14, t: 3 } and is durable through: { ts: Timestamp 1459929161000|14, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.400-0500 c20011| 2016-04-06T02:52:41.947-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|14, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.403-0500 c20011| 2016-04-06T02:52:41.947-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
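[editor's note] The replSetUpdatePosition traffic above is how each member reports its applied and durable optimes upstream; once a majority of the set is durable through an optime, the primary advances _lastCommittedOpTime (the "Updating _lastCommittedOpTime to ..." lines), which is what unblocks w:"majority" writes and "majority" reads. A minimal shell sketch for watching the commit point move on one of these config nodes; that replSetGetStatus exposes optimes.lastCommittedOpTime on this build is an assumption:

    // Hedged sketch: poll a node's majority commit point from the shell.
    var admin = new Mongo("mongovm16:20011").getDB("admin");
    var prev = null;
    for (var i = 0; i < 50; i++) {
        var st = admin.runCommand({replSetGetStatus: 1});
        // assumed field; the shape may differ across server versions
        var committed = st.optimes && st.optimes.lastCommittedOpTime;
        if (tojson(committed) !== tojson(prev)) {
            print("commit point advanced: " + tojson(committed));
            prev = committed;
        }
        sleep(100); // shell built-in; pause 100ms between polls
    }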
[js_test:multi_coll_drop] 2016-04-06T02:53:23.405-0500 c20011| 2016-04-06T02:52:41.951-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|13, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 40ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.424-0500 c20011| 2016-04-06T02:52:41.951-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04965c17830b843f1b3') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 44ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.426-0500 c20011| 2016-04-06T02:52:41.953-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.430-0500 c20011| 2016-04-06T02:52:41.954-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.433-0500 c20011| 2016-04-06T02:52:41.954-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.434-0500 c20011| 2016-04-06T02:52:41.954-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.436-0500 c20011| 2016-04-06T02:52:41.954-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:23.439-0500 c20011| 2016-04-06T02:52:41.955-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.442-0500 c20011| 2016-04-06T02:52:41.955-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04965c17830b843f1b5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161955), why: "splitting chunk [{ _id: -73.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.443-0500 c20011| 2016-04-06T02:52:41.955-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.445-0500 c20011| 2016-04-06T02:52:41.955-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.445-0500 c20011| 2016-04-06T02:52:41.955-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.447-0500 c20011| 2016-04-06T02:52:41.956-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|14, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.449-0500 c20011| 2016-04-06T02:52:41.959-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.452-0500 c20011| 2016-04-06T02:52:41.967-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.453-0500 c20011| 2016-04-06T02:52:41.967-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:23.456-0500 c20011| 2016-04-06T02:52:41.967-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.458-0500 c20011| 2016-04-06T02:52:41.967-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|15, t: 3 } and is durable through: { ts: Timestamp 1459929161000|14, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.460-0500 c20011| 2016-04-06T02:52:41.967-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.462-0500 c20011| 2016-04-06T02:52:41.970-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929161000|15, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|14, t: 3 }, name-id: "214" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.466-0500 c20011| 2016-04-06T02:52:41.996-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.466-0500 c20011| 2016-04-06T02:52:41.996-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:23.468-0500 c20011| 2016-04-06T02:52:41.996-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.471-0500 c20011| 2016-04-06T02:52:41.996-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|15, t: 3 } and is durable through: { ts: Timestamp 1459929161000|15, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.472-0500 c20011| 2016-04-06T02:52:41.996-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|15, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.478-0500 c20011| 2016-04-06T02:52:41.997-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.484-0500 c20011| 2016-04-06T02:52:41.997-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04965c17830b843f1b5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161955), why: "splitting chunk [{ _id: -73.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04965c17830b843f1b5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929161955), why: "splitting chunk [{ _id: -73.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 41ms
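[editor's note] The two findAndModify commands on config.locks above are the distributed-lock protocol in miniature: the previous split's lock is released by resetting state to 0 keyed on the lock's ts, and the lock is re-acquired for the next split by atomically flipping the "multidrop.coll" document from state 0 to 2 while recording who, process, when, and why, all under w:"majority". An illustrative shell rendering of that acquire/release pair (field values echo the log; this is a sketch, not the server's DistLockManager code):

    // Hedged sketch of the config.locks acquire/release pattern above.
    var cfg = new Mongo("mongovm16:20011").getDB("config");
    var lock = cfg.locks.findAndModify({
        query:  {_id: "multidrop.coll", state: 0},     // only if currently unlocked
        update: {$set: {ts: new ObjectId(), state: 2,
                        who: "host:port:epoch:connN",  // placeholder identifiers
                        process: "host:port:epoch",
                        when: new Date(),
                        why: "splitting chunk"}},
        upsert: true, new: true
    });
    // ... protected section: commit the split, write the changelog ...
    cfg.locks.findAndModify({
        query:  {ts: lock.ts},                         // release by the lock's ts
        update: {$set: {state: 0}}
    });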
[js_test:multi_coll_drop] 2016-04-06T02:53:23.486-0500 c20011| 2016-04-06T02:52:41.997-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|14, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 37ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.487-0500 c20011| 2016-04-06T02:52:42.016-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|56 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|15, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.488-0500 c20011| 2016-04-06T02:52:42.016-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|15, t: 3 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.489-0500 c20011| 2016-04-06T02:52:42.016-0500 D COMMAND [conn40] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|56 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|15, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.489-0500 c20011| 2016-04-06T02:52:42.016-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:23.490-0500 c20011| 2016-04-06T02:52:42.017-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|56 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|15, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.493-0500 c20011| 2016-04-06T02:52:42.017-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|15, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.497-0500 c20011| 2016-04-06T02:52:42.018-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: -72.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-73.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-72.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|56 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.497-0500 c20011| 2016-04-06T02:52:42.018-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:53:23.502-0500 c20011| 2016-04-06T02:52:42.018-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:23.504-0500 c20011| 2016-04-06T02:52:42.018-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.505-0500 c20011| 2016-04-06T02:52:42.018-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-73.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.507-0500 c20011| 2016-04-06T02:52:42.018-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-72.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.513-0500 c20011| 2016-04-06T02:52:42.020-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|15, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 44 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.515-0500 c20011| 2016-04-06T02:52:42.024-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|15, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.518-0500 c20011| 2016-04-06T02:52:42.025-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|15, t: 3 }, name-id: "215" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.520-0500 c20011| 2016-04-06T02:52:42.026-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.521-0500 c20011| 2016-04-06T02:52:42.026-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:23.524-0500 c20011| 2016-04-06T02:52:42.026-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.525-0500 c20011| 2016-04-06T02:52:42.026-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|1, t: 3 } and is durable through: { ts: Timestamp 1459929161000|15, t: 3 }
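[editor's note] Note the read-side pattern running through these entries: every config read carries readConcern { level: "majority", afterOpTime: ... }, and whenever the requested optime is ahead of the newest committed snapshot the server logs "Required snapshot optime ... is not yet part of the current 'committed' snapshot" and parks the read until replication catches up (the "Waiting for 'committed' snapshot to be available for reading" lines). The same kind of pinned read can be issued by hand; the afterOpTime below is a placeholder, not a value to hard-code:

    // Hedged sketch: a majority read pinned to an optime, as in the log.
    var res = new Mongo("mongovm16:20011").getDB("config").runCommand({
        find: "chunks",
        filter: {ns: "multidrop.coll"},
        sort: {lastmod: -1},
        limit: 1,
        readConcern: {level: "majority",
                      afterOpTime: {ts: Timestamp(1459929162, 1), t: NumberLong(3)}},
        maxTimeMS: 30000 // bounds the wait for the committed snapshot
    });
    printjson(res.cursor.firstBatch);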
[js_test:multi_coll_drop] 2016-04-06T02:53:23.526-0500 c20011| 2016-04-06T02:52:42.026-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929161000|15, t: 3 }, name-id: "215" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.529-0500 c20011| 2016-04-06T02:52:42.026-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.533-0500 c20011| 2016-04-06T02:52:42.034-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.533-0500 c20011| 2016-04-06T02:52:42.034-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:23.535-0500 c20011| 2016-04-06T02:52:42.034-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.538-0500 c20011| 2016-04-06T02:52:42.034-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|1, t: 3 } and is durable through: { ts: Timestamp 1459929162000|1, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.538-0500 c20011| 2016-04-06T02:52:42.034-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|1, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.546-0500 c20011| 2016-04-06T02:52:42.034-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.552-0500 c20011| 2016-04-06T02:52:42.034-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: -72.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-73.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-72.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|56 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 16ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.555-0500 c20011| 2016-04-06T02:52:42.034-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|15, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 10ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.567-0500 c20011| 2016-04-06T02:52:42.035-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.035-0500-5704c04a65c17830b843f1b6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162035), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -73.0 }, max: { _id: MaxKey } }, left: { min: { _id: -73.0 }, max: { _id: -72.0 }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -72.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.572-0500 c20011| 2016-04-06T02:52:42.035-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|1, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.581-0500 c20011| 2016-04-06T02:52:42.035-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|1, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:23.586-0500 c20011| 2016-04-06T02:52:42.043-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|1, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.594-0500 c20011| 2016-04-06T02:52:42.051-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|2, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|1, t: 3 }, name-id: "216" }
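[editor's note] The split itself commits in the applyOps just above: two upserts into config.chunks carve [{ _id: -73 }, MaxKey) into [{ _id: -73 }, { _id: -72 }) and [{ _id: -72 }, MaxKey) with bumped versions 1|57 and 1|58, guarded by a preCondition asserting that the collection's top version is still 1|56, so a concurrent metadata writer makes the command fail rather than be clobbered; the changelog insert that follows records the split. A cut-down shell rendering of the same command shape (lastmodEpoch omitted for brevity):

    // Hedged sketch: the precondition-guarded split commit from the log.
    var res = new Mongo("mongovm16:20011").getDB("config").runCommand({
        applyOps: [
            {op: "u", b: true, ns: "config.chunks",
             o:  {_id: "multidrop.coll-_id_-73.0", lastmod: Timestamp(1000, 57),
                  ns: "multidrop.coll", min: {_id: -73}, max: {_id: -72},
                  shard: "shard0000"},
             o2: {_id: "multidrop.coll-_id_-73.0"}},
            {op: "u", b: true, ns: "config.chunks",
             o:  {_id: "multidrop.coll-_id_-72.0", lastmod: Timestamp(1000, 58),
                  ns: "multidrop.coll", min: {_id: -72}, max: {_id: MaxKey},
                  shard: "shard0000"},
             o2: {_id: "multidrop.coll-_id_-72.0"}}
        ],
        preCondition: [{ns: "config.chunks",
                        q: {query: {ns: "multidrop.coll"}, orderby: {lastmod: -1}},
                        res: {lastmod: Timestamp(1000, 56)}}],
        writeConcern: {w: "majority", wtimeout: 15000}
    });
    assert.commandWorked(res); // fails if another writer won the race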
[js_test:multi_coll_drop] 2016-04-06T02:53:23.601-0500 c20011| 2016-04-06T02:52:42.059-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.606-0500 c20013| 2016-04-06T02:52:10.257-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 972 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|7, t: 1, h: -6994787252017545484, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c02a65c17830b843f1a2'), state: 2, when: new Date(1459929130256), why: "splitting chunk [{ _id: -82.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.611-0500 c20013| 2016-04-06T02:52:10.257-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|7 and ending at ts: Timestamp 1459929130000|7
[js_test:multi_coll_drop] 2016-04-06T02:53:23.615-0500 c20013| 2016-04-06T02:52:10.257-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:23.616-0500 c20013| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.619-0500 c20013| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.620-0500 c20013| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.623-0500 c20013| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.623-0500 c20013| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.625-0500 c20013| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.626-0500 c20013| 2016-04-06T02:52:10.257-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.631-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.631-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.633-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.635-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.638-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.639-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.639-0500 c20013| 2016-04-06T02:52:10.258-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:23.640-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.641-0500 c20013| 2016-04-06T02:52:10.258-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.642-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.642-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.643-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.648-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.650-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.650-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.652-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.653-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.654-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.655-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.662-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.663-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.666-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.676-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.676-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.678-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.679-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.679-0500 c20013| 2016-04-06T02:52:10.258-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.682-0500 c20013| 2016-04-06T02:52:10.259-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:23.691-0500 c20013| 2016-04-06T02:52:10.259-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.700-0500 c20013| 2016-04-06T02:52:10.259-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 976 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|6, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.700-0500 c20013| 2016-04-06T02:52:10.259-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 976 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:23.703-0500 c20013| 2016-04-06T02:52:10.259-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 976 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.704-0500 c20013| 2016-04-06T02:52:10.259-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 977 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.259-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|6, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.706-0500 c20013| 2016-04-06T02:52:10.259-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 977 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:23.712-0500 c20013| 2016-04-06T02:52:10.265-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.717-0500 c20013| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 979 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.721-0500 c20013| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 979 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:23.723-0500 c20013| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 979 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.737-0500 c20013| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 977 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.737-0500 c20013| 2016-04-06T02:52:10.265-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|7, t: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.740-0500 c20013| 2016-04-06T02:52:10.265-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:23.742-0500 c20013| 2016-04-06T02:52:10.265-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 982 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.265-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.745-0500 c20013| 2016-04-06T02:52:10.265-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 982 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:23.749-0500 c20013| 2016-04-06T02:52:10.267-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 982 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|8, t: 1, h: -1899469897052357851, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: -81.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-82.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-81.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.752-0500 c20013| 2016-04-06T02:52:10.267-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|8 and ending at ts: Timestamp 1459929130000|8
[js_test:multi_coll_drop] 2016-04-06T02:53:23.753-0500 c20013| 2016-04-06T02:52:10.267-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:23.754-0500 c20013| 2016-04-06T02:52:10.267-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.754-0500 c20013| 2016-04-06T02:52:10.267-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.755-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.755-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.758-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.759-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.759-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.760-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.762-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.763-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.766-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.774-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.775-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.776-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.776-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:23.776-0500 c20013| 2016-04-06T02:52:10.268-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:23.777-0500 c20013| 2016-04-06T02:52:10.268-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-82.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:23.779-0500 s20014| 2016-04-06T02:53:04.664-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:23.782-0500 s20014| 2016-04-06T02:53:04.665-0500 D NETWORK [ReplicaSetMonitorWatcher] connected connection!
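[editor's note] The s20014 lines interleaving from here show the other half of this test's design: the continuous-stepdown override keeps deposing the config primary, so the Balancer's command fails with HostUnreachable ("End of file"), is classified as retriable, and the stale pooled connection is discarded while the replica-set monitor hunts for a new primary. Code that must survive such windows typically wraps operations in a retry loop; a minimal sketch, where withRetries is a hypothetical helper, not part of the jstest libraries:

    // Hedged sketch: retry a shell operation across config stepdowns.
    function withRetries(fn, attempts) {
        for (var i = 1; i <= attempts; i++) {
            try {
                return fn();
            } catch (e) {
                print("retriable failure on attempt " + i + ": " + e);
                sleep(500); // give the set time to elect a new primary
            }
        }
        throw new Error("gave up after " + attempts + " attempts");
    }

    // Usage: a config read that may race with a stepdown.
    var top = withRetries(function() {
        return new Mongo("mongovm16:20014").getDB("config").chunks
                   .find({ns: "multidrop.coll"}).sort({lastmod: -1}).limit(1).next();
    }, 10);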
[js_test:multi_coll_drop] 2016-04-06T02:53:23.786-0500 s20014| 2016-04-06T02:53:05.165-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:23.790-0500 s20014| 2016-04-06T02:53:05.165-0500 D NETWORK [Balancer] SocketException: remote: (NONE):0 error: 9001 socket exception [CLOSED] server [192.168.100.28:20011]
[js_test:multi_coll_drop] 2016-04-06T02:53:23.794-0500 s20014| 2016-04-06T02:53:05.165-0500 D - [Balancer] User Assertion: 6:network error while attempting to run command 'ismaster' on host 'mongovm16:20011'
[js_test:multi_coll_drop] 2016-04-06T02:53:23.796-0500 s20014| 2016-04-06T02:53:05.166-0500 I NETWORK [Balancer] Detected bad connection created at 1459929137201069 microSec, clearing pool for mongovm16:20011 of 0 connections
[js_test:multi_coll_drop] 2016-04-06T02:53:23.797-0500 s20014| 2016-04-06T02:53:05.166-0500 D NETWORK [Balancer] Marking host mongovm16:20011 as failed
[js_test:multi_coll_drop] 2016-04-06T02:53:23.798-0500 s20014| 2016-04-06T02:53:05.167-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:23.799-0500 s20014| 2016-04-06T02:53:05.667-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:23.800-0500 s20014| 2016-04-06T02:53:05.667-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:23.802-0500 s20014| 2016-04-06T02:53:05.676-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:multi_coll_drop] 2016-04-06T02:53:23.806-0500 s20014| 2016-04-06T02:53:05.677-0500 D NETWORK [Balancer] connected to server mongovm16:20011 (192.168.100.28)
[js_test:multi_coll_drop] 2016-04-06T02:53:23.807-0500 s20014| 2016-04-06T02:53:05.677-0500 D NETWORK [Balancer] connected connection!
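[editor's note] The cycle above ("Starting new refresh of replica set multidrop-configRS", socket exception, "Marking host mongovm16:20011 as failed", "No primary detected", then a fresh connection half a second later) is the ReplicaSetMonitor converging on the post-stepdown topology. Its current view can be inspected from the shell; that connPoolStats reports a replicaSets section on this mongos version is an assumption:

    // Hedged sketch: dump the replica-set monitor's view from mongos.
    var stats = new Mongo("mongovm16:20014").getDB("admin")
                    .runCommand({connPoolStats: 1});
    printjson(stats.replicaSets); // per-host state as the monitor sees it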
[js_test:multi_coll_drop] 2016-04-06T02:53:23.819-0500 s20014| 2016-04-06T02:53:05.682-0500 D ASIO [Balancer] startCommand: RemoteCommand 389 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:35.682-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.821-0500 s20014| 2016-04-06T02:53:05.682-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 389 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.822-0500 s20014| 2016-04-06T02:53:07.132-0500 D NETWORK [PeriodicTaskRunner] polling for status of connection to 192.168.100.28:20010, no events [js_test:multi_coll_drop] 2016-04-06T02:53:23.826-0500 s20014| 2016-04-06T02:53:07.132-0500 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager [js_test:multi_coll_drop] 2016-04-06T02:53:23.829-0500 s20014| 2016-04-06T02:53:07.133-0500 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 390 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:37.133-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.830-0500 s20014| 2016-04-06T02:53:07.133-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.831-0500 s20014| 2016-04-06T02:53:07.133-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 391 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.835-0500 s20014| 2016-04-06T02:53:07.133-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.836-0500 s20014| 2016-04-06T02:53:07.133-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 391 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:23.837-0500 s20014| 2016-04-06T02:53:07.133-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 390 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.842-0500 s20014| 2016-04-06T02:53:07.134-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 390 finished with response: { cacheGeneration: ObjectId('5704c01f525046a6a8063338'), ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.849-0500 s20014| 2016-04-06T02:53:07.134-0500 I ACCESS [UserCacheInvalidator] User cache generation changed from 5704c01c3876c4cfd2eb3eb7 to 5704c01f525046a6a8063338; invalidating user cache [js_test:multi_coll_drop] 2016-04-06T02:53:23.851-0500 s20014| 2016-04-06T02:53:08.205-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 389 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929185000|2, t: 4 }, electionId: ObjectId('7fffffff0000000000000004') } [js_test:multi_coll_drop] 2016-04-06T02:53:23.859-0500 s20014| 2016-04-06T02:53:08.206-0500 D ASIO [Balancer] startCommand: RemoteCommand 394 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.206-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.860-0500 s20014| 2016-04-06T02:53:08.206-0500 I ASIO [Balancer] dropping unhealthy pooled connection to 
mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.861-0500 s20014| 2016-04-06T02:53:08.206-0500 I ASIO [Balancer] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:23.863-0500 s20014| 2016-04-06T02:53:08.206-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.864-0500 s20014| 2016-04-06T02:53:08.206-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 395 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.869-0500 s20014| 2016-04-06T02:53:08.207-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.869-0500 s20014| 2016-04-06T02:53:08.207-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 395 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:23.874-0500 s20014| 2016-04-06T02:53:08.207-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 394 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.877-0500 s20014| 2016-04-06T02:53:08.208-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 394 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.880-0500 s20014| 2016-04-06T02:53:08.209-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929185000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.890-0500 s20014| 2016-04-06T02:53:08.209-0500 D ASIO [Balancer] startCommand: RemoteCommand 397 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:38.209-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.908-0500 s20014| 2016-04-06T02:53:08.209-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 397 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.914-0500 s20014| 2016-04-06T02:53:08.212-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 397 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.916-0500 s20014| 2016-04-06T02:53:08.215-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:53:23.919-0500 s20014| 2016-04-06T02:53:08.215-0500 D ASIO [Balancer] startCommand: RemoteCommand 399 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.215-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.921-0500 s20014| 2016-04-06T02:53:08.215-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 399 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.922-0500 s20014| 2016-04-06T02:53:08.216-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 399 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 
2016-04-06T02:53:23.922-0500 s20014| 2016-04-06T02:53:08.220-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:53:23.933-0500 s20014| 2016-04-06T02:53:08.221-0500 D ASIO [Balancer] startCommand: RemoteCommand 401 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:38.220-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929188220), up: 61, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.934-0500 s20014| 2016-04-06T02:53:08.221-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 401 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:23.936-0500 s20014| 2016-04-06T02:53:08.271-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 401 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929188000|2, t: 4 }, electionId: ObjectId('7fffffff0000000000000004') } [js_test:multi_coll_drop] 2016-04-06T02:53:23.938-0500 s20014| 2016-04-06T02:53:08.312-0500 D ASIO [conn1] startCommand: RemoteCommand 403 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.312-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.939-0500 s20014| 2016-04-06T02:53:08.313-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 403 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.940-0500 s20014| 2016-04-06T02:53:08.313-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 403 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.941-0500 s20014| 2016-04-06T02:53:08.313-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|74||5704c02806c33406d4d9c0c0 and 38 chunks [js_test:multi_coll_drop] 2016-04-06T02:53:23.944-0500 s20014| 2016-04-06T02:53:08.313-0500 D SHARDING [conn1] major version query from 1|74||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.946-0500 s20014| 2016-04-06T02:53:08.313-0500 D ASIO [conn1] startCommand: RemoteCommand 405 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.313-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.946-0500 s20014| 2016-04-06T02:53:08.313-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 405 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.947-0500 s20014| 2016-04-06T02:53:08.314-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 405 finished with response: { 
waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: -63.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.948-0500 s20014| 2016-04-06T02:53:08.314-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|76||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:23.948-0500 s20014| 2016-04-06T02:53:08.314-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 0ms sequenceNumber: 41 version: 1|76||5704c02806c33406d4d9c0c0 based on: 1|74||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:23.951-0500 s20014| 2016-04-06T02:53:08.314-0500 D ASIO [conn1] startCommand: RemoteCommand 407 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.314-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.952-0500 s20014| 2016-04-06T02:53:08.314-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 407 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.953-0500 s20014| 2016-04-06T02:53:08.315-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 407 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.954-0500 s20014| 2016-04-06T02:53:08.315-0500 I COMMAND [conn1] splitting chunk [{ _id: -63.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:23.955-0500 s20014| 2016-04-06T02:53:08.315-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20010, no events [js_test:multi_coll_drop] 2016-04-06T02:53:23.956-0500 s20014| 2016-04-06T02:53:08.433-0500 D ASIO [conn1] startCommand: RemoteCommand 409 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.433-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.957-0500 s20014| 2016-04-06T02:53:08.433-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 409 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.958-0500 s20014| 2016-04-06T02:53:08.434-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 409 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.962-0500 s20014| 
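The SHARDING entries above show the mongos's incremental routing-table refresh: instead of re-reading all 38 cached chunks, it queries config.chunks only for documents whose lastmod is at or beyond its cached collection version (1|74), merges the two returned split halves, and bumps the ChunkManager to 1|76; the same cycle repeats below for 1|76 to 1|78. A shell approximation of that diff query, with the filter copied from the log and the host assumed:

// Diff-style chunk refresh as performed above (host assumed; Timestamp 1000|74 is the cached major|minor version).
var config = new Mongo("mongovm16:20011").getDB("config");
var diff = config.chunks.find({ ns: "multidrop.coll", lastmod: { $gte: Timestamp(1000, 74) } })
                        .sort({ lastmod: 1 })
                        .toArray();
// Two documents come back -- the split's left half (1|75) and right half (1|76) --
// so the cached table reaches version 1|76 without a full reload.
diff.forEach(function(c) { printjson({ min: c.min, max: c.max, lastmod: c.lastmod }); });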
2016-04-06T02:53:08.435-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|76||5704c02806c33406d4d9c0c0 and 39 chunks [js_test:multi_coll_drop] 2016-04-06T02:53:23.964-0500 s20014| 2016-04-06T02:53:08.435-0500 D SHARDING [conn1] major version query from 1|76||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|76 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.969-0500 s20014| 2016-04-06T02:53:08.435-0500 D ASIO [conn1] startCommand: RemoteCommand 411 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:38.435-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|76 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.970-0500 s20014| 2016-04-06T02:53:08.435-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 411 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:23.971-0500 s20014| 2016-04-06T02:53:08.437-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 411 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: -62.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.972-0500 s20014| 2016-04-06T02:53:08.437-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|78||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:23.972-0500 s20014| 2016-04-06T02:53:08.437-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 2ms sequenceNumber: 42 version: 1|78||5704c02806c33406d4d9c0c0 based on: 1|76||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:23.976-0500 s20014| 2016-04-06T02:53:08.437-0500 D ASIO [conn1] startCommand: RemoteCommand 413 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:38.437-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.976-0500 s20014| 2016-04-06T02:53:08.437-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:23.977-0500 s20014| 2016-04-06T02:53:08.445-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 414 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:23.977-0500 s20014| 2016-04-06T02:53:08.446-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:23.980-0500 s20014| 2016-04-06T02:53:08.446-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 414 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:23.981-0500 s20014| 2016-04-06T02:53:08.446-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 413 on host 
mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:23.982-0500 s20014| 2016-04-06T02:53:08.725-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 413 finished with response: { waitedMS: 278, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.983-0500 s20014| 2016-04-06T02:53:08.726-0500 I COMMAND [conn1] splitting chunk [{ _id: -62.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:23.984-0500 c20012| 2016-04-06T02:52:22.712-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:23.986-0500 c20012| 2016-04-06T02:52:22.714-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:23.990-0500 c20012| 2016-04-06T02:52:22.714-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:23.991-0500 c20012| 2016-04-06T02:52:22.714-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:23.994-0500 c20012| 2016-04-06T02:52:22.714-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|10, t: 2 } and is durable through: { ts: Timestamp 1459929142000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:23.997-0500 c20012| 2016-04-06T02:52:22.714-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.004-0500 c20012| 2016-04-06T02:52:22.714-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.005-0500 c20012| 2016-04-06T02:52:22.714-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, 
cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.005-0500 c20012| 2016-04-06T02:52:22.723-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.005-0500 c20012| 2016-04-06T02:52:22.723-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.005-0500 c20012| 2016-04-06T02:52:22.723-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.006-0500 c20012| 2016-04-06T02:52:22.723-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|10, t: 2 } and is durable through: { ts: Timestamp 1459929142000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.006-0500 c20012| 2016-04-06T02:52:22.723-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.007-0500 c20012| 2016-04-06T02:52:22.724-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|10, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|9, t: 2 }, name-id: "199" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.007-0500 c20012| 2016-04-06T02:52:22.727-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.008-0500 c20012| 2016-04-06T02:52:22.727-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.008-0500 c20012| 2016-04-06T02:52:22.727-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|10, t: 2 } and is durable 
through: { ts: Timestamp 1459929142000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.008-0500 c20012| 2016-04-06T02:52:22.727-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.009-0500 c20012| 2016-04-06T02:52:22.727-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.009-0500 d20010| 2016-04-06T02:53:08.213-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:53:08.213-0500-5704c06465c17830b843f1c8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188213), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -64.0 }, max: { _id: MaxKey } }, left: { min: { _id: -64.0 }, max: { _id: -63.0 }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -63.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.009-0500 c20012| 2016-04-06T02:52:22.727-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } cursorid:25449496203 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.010-0500 c20012| 2016-04-06T02:52:22.727-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.010-0500 c20012| 2016-04-06T02:52:22.727-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.011-0500 c20012| 2016-04-06T02:52:22.727-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.011-0500 c20012| 2016-04-06T02:52:22.727-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|10, t: 2 } and is durable through: { ts: Timestamp 1459929142000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.012-0500 c20012| 2016-04-06T02:52:22.727-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, 
cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.014-0500 c20012| 2016-04-06T02:52:22.727-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } cursorid:22197973872 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.015-0500 c20012| 2016-04-06T02:52:22.727-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.016-0500 c20012| 2016-04-06T02:52:22.727-0500 I COMMAND [conn11] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: -78.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-79.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-78.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|44 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.016-0500 c20012| 2016-04-06T02:52:22.727-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.018-0500 c20012| 2016-04-06T02:52:22.727-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.019-0500 c20012| 2016-04-06T02:52:22.728-0500 D COMMAND [conn11] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:22.727-0500-5704c03665c17830b843f1aa", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142727), what: "split", ns: "multidrop.coll", details: { before: { 
min: { _id: -79.0 }, max: { _id: MaxKey } }, left: { min: { _id: -79.0 }, max: { _id: -78.0 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -78.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.019-0500 c20012| 2016-04-06T02:52:22.728-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.020-0500 c20012| 2016-04-06T02:52:22.728-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.020-0500 c20012| 2016-04-06T02:52:22.731-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.021-0500 c20012| 2016-04-06T02:52:22.731-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.021-0500 c20012| 2016-04-06T02:52:22.731-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.021-0500 c20012| 2016-04-06T02:52:22.731-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.022-0500 c20012| 2016-04-06T02:52:22.731-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|11, t: 2 } and is durable through: { ts: Timestamp 1459929142000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.023-0500 c20012| 2016-04-06T02:52:22.731-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, 
appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.024-0500 d20010| 2016-04-06T02:53:08.312-0500 I SHARDING [conn5] distributed lock with ts: 5704c04b65c17830b843f1c7' unlocked. [js_test:multi_coll_drop] 2016-04-06T02:53:24.025-0500 d20010| 2016-04-06T02:53:08.312-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -64.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -63.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|74, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 24977ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.025-0500 d20010| 2016-04-06T02:53:08.315-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -63.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -62.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|76, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:24.026-0500 d20010| 2016-04-06T02:53:08.359-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -63.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c06465c17830b843f1c9 [js_test:multi_coll_drop] 2016-04-06T02:53:24.026-0500 d20010| 2016-04-06T02:53:08.359-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|76||5704c02806c33406d4d9c0c0, current metadata version is 1|76||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:24.027-0500 d20010| 2016-04-06T02:53:08.366-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.027-0500 d20010| 2016-04-06T02:53:08.367-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|76||5704c02806c33406d4d9c0c0, took 7ms) [js_test:multi_coll_drop] 2016-04-06T02:53:24.028-0500 d20010| 2016-04-06T02:53:08.367-0500 I SHARDING [conn5] splitChunk accepted at version 1|76||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:24.030-0500 d20010| 2016-04-06T02:53:08.379-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:53:08.379-0500-5704c06465c17830b843f1ca", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188379), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -63.0 }, max: { _id: MaxKey } }, left: { min: { _id: -63.0 }, max: { _id: -62.0 }, lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -62.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.030-0500 d20010| 2016-04-06T02:53:08.433-0500 I SHARDING [conn5] distributed lock with ts: 5704c06465c17830b843f1c9' unlocked. 
[js_test:multi_coll_drop] 2016-04-06T02:53:24.031-0500 c20012| 2016-04-06T02:52:22.732-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.034-0500 c20012| 2016-04-06T02:52:22.735-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.035-0500 c20012| 2016-04-06T02:52:22.735-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.036-0500 c20012| 2016-04-06T02:52:22.735-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|11, t: 2 } and is durable through: { ts: Timestamp 1459929142000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.037-0500 c20012| 2016-04-06T02:52:22.735-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.044-0500 c20012| 2016-04-06T02:52:22.735-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.049-0500 c20012| 2016-04-06T02:52:22.738-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|11, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|10, t: 2 }, name-id: "200" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.055-0500 c20012| 2016-04-06T02:52:22.740-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.056-0500 c20012| 2016-04-06T02:52:22.740-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.057-0500 c20012| 2016-04-06T02:52:22.740-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached 
optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.060-0500 c20012| 2016-04-06T02:52:22.740-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|11, t: 2 } and is durable through: { ts: Timestamp 1459929142000|11, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.061-0500 c20012| 2016-04-06T02:52:22.740-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|11, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.063-0500 c20012| 2016-04-06T02:52:22.740-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.071-0500 c20012| 2016-04-06T02:52:22.740-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.084-0500 c20012| 2016-04-06T02:52:22.740-0500 I COMMAND [conn11] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:22.727-0500-5704c03665c17830b843f1aa", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142727), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -79.0 }, max: { _id: MaxKey } }, left: { min: { _id: -79.0 }, max: { _id: -78.0 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -78.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.092-0500 c20012| 2016-04-06T02:52:22.741-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.098-0500 c20012| 2016-04-06T02:52:22.741-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.107-0500 c20012| 2016-04-06T02:52:22.741-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c03665c17830b843f1a9') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.108-0500 c20012| 2016-04-06T02:52:22.741-0500 D QUERY [conn11] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.110-0500 c20012| 2016-04-06T02:52:22.741-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c03665c17830b843f1a9') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.113-0500 c20012| 2016-04-06T02:52:22.741-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.117-0500 c20012| 2016-04-06T02:52:22.741-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.121-0500 c20012| 2016-04-06T02:52:22.742-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.123-0500 c20012| 2016-04-06T02:52:22.743-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.124-0500 c20012| 2016-04-06T02:52:22.744-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.126-0500 d20010| 2016-04-06T02:53:08.433-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -63.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -62.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|76, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 
locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 117ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.132-0500 d20010| 2016-04-06T02:53:08.727-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -62.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -61.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|78, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:24.133-0500 d20010| 2016-04-06T02:53:08.755-0500 I SHARDING [conn5] distributed lock 'multidrop.coll' acquired for 'splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll', ts : 5704c06465c17830b843f1cb [js_test:multi_coll_drop] 2016-04-06T02:53:24.134-0500 d20010| 2016-04-06T02:53:08.756-0500 I SHARDING [conn5] remotely refreshing metadata for multidrop.coll based on current shard version 1|78||5704c02806c33406d4d9c0c0, current metadata version is 1|78||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:24.138-0500 d20010| 2016-04-06T02:53:08.771-0500 I SHARDING [conn5] metadata of collection multidrop.coll already up to date (shard version : 1|78||5704c02806c33406d4d9c0c0, took 15ms) [js_test:multi_coll_drop] 2016-04-06T02:53:24.139-0500 d20010| 2016-04-06T02:53:08.771-0500 I SHARDING [conn5] splitChunk accepted at version 1|78||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:24.147-0500 d20010| 2016-04-06T02:53:08.786-0500 I SHARDING [conn5] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:53:08.786-0500-5704c06465c17830b843f1cc", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188786), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -62.0 }, max: { _id: MaxKey } }, left: { min: { _id: -62.0 }, max: { _id: -61.0 }, lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -61.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.147-0500 c20012| 2016-04-06T02:52:22.744-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.148-0500 c20012| 2016-04-06T02:52:22.744-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.152-0500 c20012| 2016-04-06T02:52:22.744-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|11, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.156-0500 c20012| 2016-04-06T02:52:22.744-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.160-0500 c20012| 2016-04-06T02:52:22.744-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.161-0500 c20012| 2016-04-06T02:52:22.744-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929142000|12, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|11, t: 2 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.163-0500 c20012| 2016-04-06T02:52:22.745-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.164-0500 c20012| 2016-04-06T02:52:22.745-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.166-0500 c20012| 2016-04-06T02:52:22.745-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|11, t: 2 } and is durable through: { ts: Timestamp 1459929142000|11, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.168-0500 c20012| 2016-04-06T02:52:22.745-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929142000|12, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|11, t: 2 }, name-id: "201" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.185-0500 c20012| 2016-04-06T02:52:22.745-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.190-0500 c20012| 2016-04-06T02:52:22.745-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.198-0500 c20012| 2016-04-06T02:52:22.746-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.199-0500 c20012| 2016-04-06T02:52:22.746-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.234-0500 c20012| 2016-04-06T02:52:22.746-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.243-0500 c20012| 2016-04-06T02:52:22.746-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.245-0500 c20012| 2016-04-06T02:52:22.746-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.250-0500 c20012| 2016-04-06T02:52:22.746-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.252-0500 c20012| 2016-04-06T02:52:22.746-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.261-0500 c20012| 2016-04-06T02:52:22.746-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.268-0500 c20012| 2016-04-06T02:52:22.746-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c03665c17830b843f1a9') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.271-0500 c20012| 2016-04-06T02:52:22.747-0500 D COMMAND [conn16] run command 
admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.271-0500 c20012| 2016-04-06T02:52:22.747-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.274-0500 c20012| 2016-04-06T02:52:22.747-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|11, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.279-0500 c20012| 2016-04-06T02:52:22.747-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.285-0500 c20012| 2016-04-06T02:52:22.747-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.288-0500 c20012| 2016-04-06T02:52:22.747-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.293-0500 c20012| 2016-04-06T02:52:22.747-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.296-0500 c20012| 2016-04-06T02:52:22.747-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.297-0500 c20012| 2016-04-06T02:52:22.747-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.299-0500 c20012| 2016-04-06T02:52:22.747-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.300-0500 c20012| 2016-04-06T02:52:22.747-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:24.304-0500 c20012| 2016-04-06T02:52:22.748-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.308-0500 c20012| 2016-04-06T02:52:22.749-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.309-0500 c20012| 2016-04-06T02:52:26.805-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.313-0500 c20012| 2016-04-06T02:52:26.805-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.314-0500 c20012| 2016-04-06T02:52:23.145-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1060 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:33.145-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.317-0500 c20012| 2016-04-06T02:52:24.056-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.318-0500 c20012| 2016-04-06T02:52:26.805-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:24.320-0500 c20012| 2016-04-06T02:52:26.805-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1061 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:36.805-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.322-0500 c20012| 2016-04-06T02:52:24.077-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.322-0500 c20012| 2016-04-06T02:52:26.805-0500 D COMMAND [conn5] command: replSetHeartbeat 
[js_test:multi_coll_drop] 2016-04-06T02:53:24.322-0500 c20012| 2016-04-06T02:52:25.026-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.326-0500 c20012| 2016-04-06T02:52:26.805-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929130000|10, t: 1 } and is durable through: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.330-0500 c20012| 2016-04-06T02:52:26.805-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.332-0500 c20013| 2016-04-06T02:52:10.268-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-81.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.334-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.335-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.335-0500 c20013| 2016-04-06T02:52:10.268-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.336-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.341-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.343-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.344-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.345-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.349-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.350-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.350-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.357-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.357-0500 c20013| 
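The replSetUpdatePosition traffic and the "Updating _lastCommittedOpTime" entries above are the config replica set's majority commit point advancing: each member reports how far it is durable, and the commit point moves to the newest optime that a majority of the three members has made durable; majority reads (the "not yet part of the current 'committed' snapshot" waits) block until that point catches up to their afterOpTime. A toy recomputation in plain JS, with the durable optimes transcribed from the log and the then-primary's own position assumed current:

// Toy model of majority commit point selection (not server internals; optimes from the log,
// ts values kept exactly as the log prints them).
function majorityCommitPoint(durable) {
    durable.sort(function(a, b) {                         // ascending by (term, ts, inc)
        return a.term - b.term || a.ts - b.ts || a.inc - b.inc;
    });
    var majority = Math.floor(durable.length / 2) + 1;    // 2 of 3 members
    return durable[durable.length - majority];            // newest optime a majority has reached
}
printjson(majorityCommitPoint([
    { term: 2, ts: 1459929142000, inc: 10 },  // memberId 0, durable (from the log)
    { term: 2, ts: 1459929142000, inc: 10 },  // memberId 1, the reporting primary itself (assumed)
    { term: 1, ts: 1459929130000, inc: 10 }   // memberId 2, still lagging (from the log)
]));  // -> { term: 2, ts: 1459929142000, inc: 10 }, matching "Updating _lastCommittedOpTime"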
2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.358-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.358-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.359-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.359-0500 c20013| 2016-04-06T02:52:10.269-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.362-0500 c20013| 2016-04-06T02:52:10.269-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:24.366-0500 c20013| 2016-04-06T02:52:10.269-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.368-0500 c20013| 2016-04-06T02:52:10.269-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 984 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|7, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.369-0500 c20013| 2016-04-06T02:52:10.269-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 984 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.376-0500 c20013| 2016-04-06T02:52:10.269-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 985 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.269-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|7, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.378-0500 c20013| 2016-04-06T02:52:10.269-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 984 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.380-0500 c20013| 2016-04-06T02:52:10.269-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 985 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.384-0500 c20013| 2016-04-06T02:52:10.276-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog 
progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.391-0500 c20013| 2016-04-06T02:52:10.276-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 987 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.392-0500 c20013| 2016-04-06T02:52:10.276-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 987 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.392-0500 c20013| 2016-04-06T02:52:10.276-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 987 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.400-0500 c20013| 2016-04-06T02:52:10.276-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 985 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.404-0500 c20013| 2016-04-06T02:52:10.276-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|8, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.409-0500 c20013| 2016-04-06T02:52:10.276-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:24.411-0500 c20013| 2016-04-06T02:52:10.277-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 990 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.277-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.413-0500 c20013| 2016-04-06T02:52:10.277-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 990 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.420-0500 c20013| 2016-04-06T02:52:10.277-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 990 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|9, t: 1, h: -5293347687548571671, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:10.276-0500-5704c02a65c17830b843f1a3", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929130276), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -82.0 }, max: { _id: MaxKey } }, left: { min: { _id: -82.0 }, max: { _id: -81.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -81.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: 
ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.422-0500 c20013| 2016-04-06T02:52:10.277-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|9 and ending at ts: Timestamp 1459929130000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:24.423-0500 c20013| 2016-04-06T02:52:10.277-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:24.424-0500 c20013| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.424-0500 c20013| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.426-0500 c20013| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.428-0500 c20013| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.430-0500 c20013| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.433-0500 c20013| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.435-0500 c20013| 2016-04-06T02:52:10.277-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.435-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.435-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.437-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.438-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.439-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.441-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.442-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.442-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.443-0500 c20013| 2016-04-06T02:52:10.278-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:24.443-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:24.444-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.447-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.448-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.448-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.466-0500 c20012| 2016-04-06T02:52:26.806-0500 I COMMAND [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after connections: 0, after extra_info: 0, after globalLock: 0, after locks: 0, after network: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 2770, after storageEngine: 2770, after tcmalloc: 2770, after wiredTiger: 2770, at end: 2770 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.471-0500 c20012| 2016-04-06T02:52:26.806-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } cursorid:22197973872 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 4059ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.473-0500 c20012| 2016-04-06T02:52:26.807-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.477-0500 c20012| 2016-04-06T02:52:26.809-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:480 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.484-0500 c20012| 2016-04-06T02:52:26.809-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } cursorid:25449496203 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 4061ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.485-0500 c20012| 2016-04-06T02:52:26.809-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.489-0500 c20012| 2016-04-06T02:52:26.810-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.491-0500 c20012| 2016-04-06T02:52:26.810-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:480 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 
2016-04-06T02:53:24.496-0500 c20012| 2016-04-06T02:52:25.246-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.497-0500 c20012| 2016-04-06T02:52:26.811-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.497-0500 c20012| 2016-04-06T02:52:26.805-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1060 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.508-0500 c20012| 2016-04-06T02:52:26.811-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.517-0500 c20012| 2016-04-06T02:52:26.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1061 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:24.521-0500 c20012| 2016-04-06T02:52:26.811-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.529-0500 c20012| 2016-04-06T02:52:26.811-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.538-0500 c20012| 2016-04-06T02:52:26.811-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03a65c17830b843f1ab'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146811), why: "splitting chunk [{ _id: -78.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.542-0500 c20012| 2016-04-06T02:52:26.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1061 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, opTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.549-0500 c20012| 2016-04-06T02:52:26.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1060 finished with 
response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, opTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.553-0500 c20012| 2016-04-06T02:52:26.811-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:28.811Z [js_test:multi_coll_drop] 2016-04-06T02:53:24.555-0500 c20012| 2016-04-06T02:52:26.811-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:28.811Z [js_test:multi_coll_drop] 2016-04-06T02:53:24.565-0500 c20012| 2016-04-06T02:52:26.811-0500 D QUERY [conn11] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.567-0500 c20012| 2016-04-06T02:52:26.811-0500 D QUERY [conn11] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.568-0500 c20011| 2016-04-06T02:52:42.059-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.571-0500 c20011| 2016-04-06T02:52:42.059-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.574-0500 c20011| 2016-04-06T02:52:42.059-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|2, t: 3 } and is durable through: { ts: Timestamp 1459929162000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.578-0500 c20011| 2016-04-06T02:52:42.059-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|2, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|1, t: 3 }, name-id: "216" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.582-0500 c20011| 2016-04-06T02:52:42.059-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.587-0500 c20011| 2016-04-06T02:52:42.064-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.588-0500 c20011| 2016-04-06T02:52:42.064-0500 D COMMAND [conn35] 
command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.589-0500 c20011| 2016-04-06T02:52:42.064-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.597-0500 c20011| 2016-04-06T02:52:42.064-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|2, t: 3 } and is durable through: { ts: Timestamp 1459929162000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.599-0500 c20011| 2016-04-06T02:52:42.064-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.611-0500 c20011| 2016-04-06T02:52:42.064-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.624-0500 c20011| 2016-04-06T02:52:42.064-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.035-0500-5704c04a65c17830b843f1b6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162035), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -73.0 }, max: { _id: MaxKey } }, left: { min: { _id: -73.0 }, max: { _id: -72.0 }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -72.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 29ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.631-0500 c20011| 2016-04-06T02:52:42.065-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|1, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.633-0500 c20011| 2016-04-06T02:52:42.065-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04965c17830b843f1b5') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.643-0500 c20011| 2016-04-06T02:52:42.065-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, 
name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.645-0500 c20011| 2016-04-06T02:52:42.065-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04965c17830b843f1b5') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.647-0500 c20011| 2016-04-06T02:52:42.066-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.648-0500 c20012| 2016-04-06T02:52:26.811-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.671-0500 c20012| 2016-04-06T02:52:26.812-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.681-0500 c20011| 2016-04-06T02:52:42.068-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|2, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.689-0500 c20012| 2016-04-06T02:52:26.812-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.700-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.707-0500 c20011| 2016-04-06T02:52:42.068-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|3, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|2, t: 3 }, name-id: "217" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.708-0500 c20011| 2016-04-06T02:52:42.071-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.710-0500 c20011| 2016-04-06T02:52:42.072-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.711-0500 c20011| 2016-04-06T02:52:42.072-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.715-0500 c20011| 2016-04-06T02:52:42.072-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.719-0500 c20011| 2016-04-06T02:52:42.072-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|3, t: 3 } and is durable through: { ts: Timestamp 1459929162000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.720-0500 c20011| 2016-04-06T02:52:42.072-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|3, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|2, t: 3 }, name-id: "217" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.723-0500 c20011| 2016-04-06T02:52:42.072-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.727-0500 c20011| 2016-04-06T02:52:42.077-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.728-0500 c20011| 2016-04-06T02:52:42.077-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.730-0500 c20011| 2016-04-06T02:52:42.077-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.732-0500 c20011| 2016-04-06T02:52:42.077-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|3, t: 3 } and is durable through: { ts: Timestamp 1459929162000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.732-0500 c20011| 2016-04-06T02:52:42.077-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.738-0500 c20011| 2016-04-06T02:52:42.077-0500 I COMMAND [conn35] command admin.$cmd command: 
replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.748-0500 c20011| 2016-04-06T02:52:42.077-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04965c17830b843f1b5') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.751-0500 c20011| 2016-04-06T02:52:42.078-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|2, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.752-0500 c20011| 2016-04-06T02:52:42.081-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.753-0500 c20011| 2016-04-06T02:52:42.082-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|56 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.760-0500 c20011| 2016-04-06T02:52:42.082-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.762-0500 c20011| 2016-04-06T02:52:42.082-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|56 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.766-0500 c20011| 2016-04-06T02:52:42.082-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:24.768-0500 c20011| 2016-04-06T02:52:42.082-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|56 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.772-0500 c20011| 2016-04-06T02:52:42.087-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1b7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162087), why: "splitting chunk [{ _id: -72.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.774-0500 c20011| 2016-04-06T02:52:42.087-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.778-0500 c20011| 2016-04-06T02:52:42.087-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.780-0500 c20011| 2016-04-06T02:52:42.087-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.784-0500 c20011| 2016-04-06T02:52:42.087-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.785-0500 c20011| 2016-04-06T02:52:42.091-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.786-0500 c20011| 2016-04-06T02:52:42.092-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.787-0500 c20011| 2016-04-06T02:52:42.093-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.788-0500 c20011| 2016-04-06T02:52:42.093-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.791-0500 c20011| 2016-04-06T02:52:42.093-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|4, t: 3 } and is durable through: { ts: Timestamp 1459929162000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.801-0500 c20011| 2016-04-06T02:52:42.093-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.803-0500 c20011| 2016-04-06T02:52:42.093-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|4, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|3, t: 3 }, name-id: "218" } [js_test:multi_coll_drop] 2016-04-06T02:53:24.818-0500 c20011| 2016-04-06T02:52:42.097-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.819-0500 c20011| 2016-04-06T02:52:42.097-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.820-0500 c20011| 2016-04-06T02:52:42.097-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.835-0500 c20011| 2016-04-06T02:52:42.097-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|4, t: 3 } and is durable through: { ts: Timestamp 1459929162000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.838-0500 c20011| 2016-04-06T02:52:42.097-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.840-0500 c20012| 2016-04-06T02:52:26.814-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.843-0500 c20012| 2016-04-06T02:52:26.815-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.845-0500 c20012| 2016-04-06T02:52:26.815-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.845-0500 c20012| 2016-04-06T02:52:26.815-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:24.849-0500 c20012| 2016-04-06T02:52:26.815-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|1, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.851-0500 c20011| 2016-04-06T02:52:42.097-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.854-0500 c20011| 
2016-04-06T02:52:42.098-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1b7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162087), why: "splitting chunk [{ _id: -72.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04a65c17830b843f1b7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162087), why: "splitting chunk [{ _id: -72.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.871-0500 c20011| 2016-04-06T02:52:42.098-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:24.874-0500 c20011| 2016-04-06T02:52:42.098-0500 D COMMAND [conn40] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.875-0500 c20011| 2016-04-06T02:52:42.098-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|4, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.880-0500 c20011| 2016-04-06T02:52:42.098-0500 D COMMAND [conn40] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.881-0500 c20011| 2016-04-06T02:52:42.098-0500 D QUERY [conn40] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:24.882-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.883-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.883-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.886-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.887-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.888-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.891-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.893-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.893-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.894-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.897-0500 c20013| 2016-04-06T02:52:10.278-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.898-0500 c20013| 2016-04-06T02:52:10.279-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:24.913-0500 c20013| 2016-04-06T02:52:10.279-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.920-0500 c20013| 2016-04-06T02:52:10.279-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 992 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|8, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.925-0500 c20013| 2016-04-06T02:52:10.279-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 992 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.926-0500 c20013| 2016-04-06T02:52:10.279-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 992 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.938-0500 c20013| 2016-04-06T02:52:10.279-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 994 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.279-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|8, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.939-0500 c20013| 2016-04-06T02:52:10.279-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 994 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.943-0500 c20013| 2016-04-06T02:52:10.281-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.949-0500 c20013| 2016-04-06T02:52:10.281-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 995 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:24.951-0500 c20013| 2016-04-06T02:52:10.281-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 995 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.951-0500 c20013| 2016-04-06T02:52:10.281-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 995 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.956-0500 c20013| 2016-04-06T02:52:10.281-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 994 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.956-0500 c20013| 2016-04-06T02:52:10.281-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|9, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.956-0500 c20013| 2016-04-06T02:52:10.281-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:24.958-0500 c20013| 2016-04-06T02:52:10.281-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 998 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.281-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:24.959-0500 c20013| 2016-04-06T02:52:10.281-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 998 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:24.967-0500 c20013| 2016-04-06T02:52:10.282-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 998 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929130000|10, t: 1, h: 3135197531614568333, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:24.971-0500 c20013| 2016-04-06T02:52:10.282-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929130000|10 and ending at ts: Timestamp 1459929130000|10 [js_test:multi_coll_drop] 2016-04-06T02:53:24.972-0500 c20013| 2016-04-06T02:52:10.282-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:53:24.976-0500 c20013| 2016-04-06T02:52:10.282-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:24.978-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.979-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.979-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.980-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.981-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.981-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.982-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.984-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.985-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.991-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.991-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.991-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.993-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.996-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.997-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.998-0500 c20013| 2016-04-06T02:52:10.282-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:24.999-0500 c20013| 2016-04-06T02:52:10.282-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:24.999-0500 c20013| 2016-04-06T02:52:10.283-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.000-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.000-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:25.001-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.002-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.002-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.003-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.004-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.004-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.004-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.004-0500 c20011| 2016-04-06T02:52:42.098-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|4, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.007-0500 c20011| 2016-04-06T02:52:42.099-0500 I COMMAND [conn40] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.009-0500 c20011| 2016-04-06T02:52:42.099-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|58 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|4, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.010-0500 c20011| 2016-04-06T02:52:42.099-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|4, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.011-0500 c20011| 2016-04-06T02:52:42.099-0500 D COMMAND [conn40] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|58 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|4, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.012-0500 c20011| 2016-04-06T02:52:42.099-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:25.013-0500 c20011| 2016-04-06T02:52:42.099-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|58 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|4, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.015-0500 c20011| 2016-04-06T02:52:42.100-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: -71.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-72.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-71.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|58 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.017-0500 c20011| 2016-04-06T02:52:42.100-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:25.021-0500 c20011| 2016-04-06T02:52:42.100-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:25.023-0500 c20011| 2016-04-06T02:52:42.100-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.025-0500 c20011| 2016-04-06T02:52:42.100-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-72.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.025-0500 c20011| 2016-04-06T02:52:42.100-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-71.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.033-0500 c20011| 2016-04-06T02:52:42.101-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", 
maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|4, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.038-0500 c20011| 2016-04-06T02:52:42.103-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|4, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.045-0500 c20011| 2016-04-06T02:52:42.105-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.045-0500 c20011| 2016-04-06T02:52:42.105-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.047-0500 c20011| 2016-04-06T02:52:42.105-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.050-0500 c20011| 2016-04-06T02:52:42.105-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|5, t: 3 } and is durable through: { ts: Timestamp 1459929162000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.053-0500 c20011| 2016-04-06T02:52:42.105-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.055-0500 c20011| 2016-04-06T02:52:42.105-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|4, t: 3 }, name-id: "219" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.061-0500 c20011| 2016-04-06T02:52:42.109-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.064-0500 c20011| 
2016-04-06T02:52:42.109-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.065-0500 c20011| 2016-04-06T02:52:42.109-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.070-0500 c20011| 2016-04-06T02:52:42.109-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|5, t: 3 } and is durable through: { ts: Timestamp 1459929162000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.070-0500 c20011| 2016-04-06T02:52:42.109-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.079-0500 c20011| 2016-04-06T02:52:42.109-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.086-0500 c20011| 2016-04-06T02:52:42.110-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: -71.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-72.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-71.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|58 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.091-0500 c20011| 2016-04-06T02:52:42.110-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|4, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.094-0500 c20011| 2016-04-06T02:52:42.111-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|5, t: 3 } } 
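The applyOps command on conn40 above is how a chunk split is committed to the config servers in this release: both halves of the split are upserted into config.chunks as one atomic batch, guarded by a preCondition asserting that the collection's highest lastmod is still Timestamp 1000|58, and written with w: "majority" so the new chunk versions become visible only once a majority of the config replica set is durable. If the preCondition no longer matches (another router committed a conflicting change first), the whole batch fails and no partial split is applied; on success the router records the event in config.changelog, which is the insert that follows below. A minimal mongo-shell sketch of the same request shape (illustrative only; the connection is assumed, lastmodEpoch is omitted, and note this server log prints a Timestamp's seconds field in milliseconds, so "Timestamp 1000|59" is Timestamp(1, 59) in the shell):

    var configPrimary = new Mongo("mongovm16:20011");          // assumed connection to the config primary
    configPrimary.getDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",           // left half: [-72.0, -71.0)
              o: { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp(1, 59),
                   ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: -71.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-72.0" } },
            { op: "u", b: true, ns: "config.chunks",           // right half: [-71.0, MaxKey)
              o: { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp(1, 60),
                   ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-71.0" } }
        ],
        // commit only if nobody else bumped this collection's chunk versions in the meantime
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1, 58) } } ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });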
[js_test:multi_coll_drop] 2016-04-06T02:53:25.098-0500 c20011| 2016-04-06T02:52:42.111-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.110-0500-5704c04a65c17830b843f1b8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162110), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -72.0 }, max: { _id: MaxKey } }, left: { min: { _id: -72.0 }, max: { _id: -71.0 }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -71.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.105-0500 c20011| 2016-04-06T02:52:42.111-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|5, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.116-0500 c20011| 2016-04-06T02:52:42.114-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|5, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.136-0500 c20011| 2016-04-06T02:52:42.115-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|6, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|5, t: 3 }, name-id: "220" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.144-0500 c20011| 2016-04-06T02:52:42.117-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.145-0500 c20011| 2016-04-06T02:52:42.117-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.154-0500 c20011| 2016-04-06T02:52:42.117-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.156-0500 c20011| 2016-04-06T02:52:42.117-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|6, t: 3 } and is durable through: { ts: Timestamp 1459929162000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.164-0500 c20011| 2016-04-06T02:52:42.117-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|6, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|5, t: 3 }, name-id: "220" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.168-0500 c20011| 
2016-04-06T02:52:42.117-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.172-0500 c20011| 2016-04-06T02:52:42.129-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.176-0500 c20011| 2016-04-06T02:52:42.129-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.180-0500 c20011| 2016-04-06T02:52:42.129-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.181-0500 c20011| 2016-04-06T02:52:42.129-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|6, t: 3 } and is durable through: { ts: Timestamp 1459929162000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.183-0500 c20011| 2016-04-06T02:52:42.129-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.185-0500 c20011| 2016-04-06T02:52:42.130-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.194-0500 c20011| 2016-04-06T02:52:42.130-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.110-0500-5704c04a65c17830b843f1b8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162110), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -72.0 }, max: { _id: MaxKey } }, left: { min: { _id: -72.0 }, max: { _id: -71.0 }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -71.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], 
writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 18ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.197-0500 c20011| 2016-04-06T02:52:42.131-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1b7') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.198-0500 c20011| 2016-04-06T02:52:42.131-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.200-0500 c20011| 2016-04-06T02:52:42.131-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04a65c17830b843f1b7') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.203-0500 c20011| 2016-04-06T02:52:42.133-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|5, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 18ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.204-0500 c20011| 2016-04-06T02:52:42.135-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|6, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.206-0500 c20011| 2016-04-06T02:52:42.137-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|7, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|6, t: 3 }, name-id: "221" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.209-0500 c20011| 2016-04-06T02:52:42.141-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.209-0500 c20011| 2016-04-06T02:52:42.141-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.210-0500 c20011| 2016-04-06T02:52:42.141-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.215-0500 c20011| 2016-04-06T02:52:42.141-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|7, t: 3 } and is durable through: { ts: Timestamp 
1459929162000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.218-0500 c20011| 2016-04-06T02:52:42.141-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|7, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|6, t: 3 }, name-id: "221" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.221-0500 c20011| 2016-04-06T02:52:42.141-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.229-0500 c20011| 2016-04-06T02:52:42.148-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.230-0500 c20011| 2016-04-06T02:52:42.148-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.237-0500 c20011| 2016-04-06T02:52:42.148-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.238-0500 c20011| 2016-04-06T02:52:42.148-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|7, t: 3 } and is durable through: { ts: Timestamp 1459929162000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.242-0500 c20011| 2016-04-06T02:52:42.148-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.248-0500 c20011| 2016-04-06T02:52:42.148-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.252-0500 c20011| 2016-04-06T02:52:42.149-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1b7') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { 
$set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.260-0500 c20011| 2016-04-06T02:52:42.149-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|6, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.261-0500 c20011| 2016-04-06T02:52:42.149-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.268-0500 c20011| 2016-04-06T02:52:42.151-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.269-0500 c20011| 2016-04-06T02:52:42.151-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.272-0500 c20011| 2016-04-06T02:52:42.151-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.273-0500 c20011| 2016-04-06T02:52:42.152-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:25.280-0500 c20011| 2016-04-06T02:52:42.152-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.280-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.284-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.285-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.288-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.289-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.290-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.292-0500 c20013| 2016-04-06T02:52:10.283-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:25.295-0500 c20013| 2016-04-06T02:52:10.283-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:25.302-0500 c20013| 2016-04-06T02:52:10.283-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.306-0500 c20012| 2016-04-06T02:52:26.815-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.312-0500 c20012| 2016-04-06T02:52:26.815-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.314-0500 c20012| 2016-04-06T02:52:26.817-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|1, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929142000|12, t: 2 }, name-id: "202" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.322-0500 c20012| 2016-04-06T02:52:26.818-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.323-0500 c20012| 2016-04-06T02:52:26.818-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.328-0500 c20012| 2016-04-06T02:52:26.818-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|1, t: 2 } and is durable through: { ts: Timestamp 1459929146000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.335-0500 c20013| 2016-04-06T02:52:10.283-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1000 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: 
Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|9, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.337-0500 c20012| 2016-04-06T02:52:26.819-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.341-0500 c20012| 2016-04-06T02:52:26.819-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.350-0500 c20012| 2016-04-06T02:52:26.819-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.351-0500 c20013| 2016-04-06T02:52:10.283-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1000 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:25.354-0500 c20013| 2016-04-06T02:52:10.284-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1000 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.358-0500 c20013| 2016-04-06T02:52:10.284-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1002 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.284-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|9, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.361-0500 c20013| 2016-04-06T02:52:10.284-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1002 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:25.368-0500 c20013| 2016-04-06T02:52:10.284-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.381-0500 c20013| 2016-04-06T02:52:10.284-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1003 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, appliedOpTime: { ts: Timestamp 1459929129000|12, t: 1 }, memberId: 1, cfgver: 1 }, 
{ durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.382-0500 c20013| 2016-04-06T02:52:10.284-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1003 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:25.384-0500 c20013| 2016-04-06T02:52:10.284-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1003 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.385-0500 c20013| 2016-04-06T02:52:10.285-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1002 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.386-0500 c20013| 2016-04-06T02:52:10.285-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.387-0500 c20013| 2016-04-06T02:52:10.285-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:25.392-0500 c20012| 2016-04-06T02:52:26.819-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03a65c17830b843f1ab'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146811), why: "splitting chunk [{ _id: -78.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c03a65c17830b843f1ab'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146811), why: "splitting chunk [{ _id: -78.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.398-0500 c20012| 2016-04-06T02:52:26.820-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.400-0500 c20012| 2016-04-06T02:52:26.820-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.402-0500 c20012| 2016-04-06T02:52:26.820-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.404-0500 c20012| 2016-04-06T02:52:26.820-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.407-0500 c20012| 2016-04-06T02:52:26.821-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.407-0500 c20012| 2016-04-06T02:52:26.821-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.409-0500 c20012| 2016-04-06T02:52:26.821-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.410-0500 c20012| 2016-04-06T02:52:26.821-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|1, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.414-0500 c20012| 2016-04-06T02:52:26.821-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.416-0500 c20012| 2016-04-06T02:52:26.821-0500 D COMMAND [conn11] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: -77.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-78.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-77.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|46 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.417-0500 c20012| 2016-04-06T02:52:26.821-0500 D QUERY [conn11] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 
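Bracketing each split commit is the distributed lock in config.locks, also visible here: the findAndModify on conn11 above atomically flips the "multidrop.coll" lock document from state 0 (free) to state 2 (held), stamping a fresh ts ObjectId plus who/process/when/why bookkeeping, and the release (the findAndModify { ts: ObjectId(...) } with { $set: { state: 0 } } seen a little earlier on c20011) clears it. Both writes use w: "majority", which is why every acquire and release is followed by the replSetUpdatePosition and commit-point traffic that fills this section. A rough shell equivalent of the acquire/release pair (illustrative only; the connection is assumed and the identifiers are copied from the log):

    var config = new Mongo("mongovm16:20012").getDB("config"); // assumed connection to the config primary
    // acquire: succeeds only if the lock is currently free (state: 0)
    config.runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: { ts: ObjectId("5704c03a65c17830b843f1ab"), state: 2,
                          who: "mongovm16:20010:1459929128:185613966:conn5",
                          process: "mongovm16:20010:1459929128:185613966",
                          when: new Date(1459929146811),
                          why: "splitting chunk [{ _id: -78.0 }, { _id: MaxKey }) in multidrop.coll" } },
        upsert: true, new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000
    });
    // release: keyed by the ts written at acquire time, so only the holder can unlock
    config.runCommand({
        findAndModify: "locks",
        query: { ts: ObjectId("5704c03a65c17830b843f1ab") },
        update: { $set: { state: 0 } },
        writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000
    });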
[js_test:multi_coll_drop] 2016-04-06T02:53:25.419-0500 c20012| 2016-04-06T02:52:26.821-0500 D QUERY [conn11] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:25.421-0500 c20012| 2016-04-06T02:52:26.821-0500 I COMMAND [conn11] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.684-0500 c20012| 2016-04-06T02:52:26.821-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-78.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.685-0500 c20012| 2016-04-06T02:52:26.822-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-77.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.688-0500 c20012| 2016-04-06T02:52:26.822-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.692-0500 c20012| 2016-04-06T02:52:26.822-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.696-0500 c20012| 2016-04-06T02:52:26.822-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.697-0500 c20012| 2016-04-06T02:52:26.822-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.699-0500 c20012| 2016-04-06T02:52:26.822-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.701-0500 c20012| 2016-04-06T02:52:26.822-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|1, t: 2 } and is durable through: { ts: Timestamp 1459929146000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.705-0500 c20012| 2016-04-06T02:52:26.822-0500 I COMMAND [conn18] command admin.$cmd command: 
replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.711-0500 c20012| 2016-04-06T02:52:26.824-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.718-0500 c20012| 2016-04-06T02:52:26.824-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.721-0500 c20012| 2016-04-06T02:52:26.824-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.722-0500 c20012| 2016-04-06T02:52:26.824-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|2, t: 2 } and is durable through: { ts: Timestamp 1459929146000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.730-0500 c20012| 2016-04-06T02:52:26.824-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.732-0500 c20012| 2016-04-06T02:52:26.825-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.733-0500 c20012| 2016-04-06T02:52:26.825-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.737-0500 c20012| 2016-04-06T02:52:26.826-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|2, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|1, t: 2 }, name-id: "203" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.739-0500 c20012| 2016-04-06T02:52:26.826-0500 D COMMAND [conn16] run command 
admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.740-0500 c20012| 2016-04-06T02:52:26.826-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.745-0500 c20012| 2016-04-06T02:52:26.826-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|2, t: 2 } and is durable through: { ts: Timestamp 1459929146000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.746-0500 c20012| 2016-04-06T02:52:26.826-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929146000|2, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|1, t: 2 }, name-id: "203" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.748-0500 c20012| 2016-04-06T02:52:26.826-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.752-0500 c20012| 2016-04-06T02:52:26.826-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.755-0500 c20012| 2016-04-06T02:52:26.827-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.755-0500 c20012| 2016-04-06T02:52:26.827-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.757-0500 c20012| 2016-04-06T02:52:26.827-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.760-0500 c20012| 2016-04-06T02:52:26.827-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|2, t: 2 } and is durable through: { ts: Timestamp 1459929146000|2, 
t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.760-0500 c20012| 2016-04-06T02:52:26.827-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.767-0500 c20012| 2016-04-06T02:52:26.827-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.771-0500 c20012| 2016-04-06T02:52:26.827-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.774-0500 c20012| 2016-04-06T02:52:26.827-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.775-0500 c20012| 2016-04-06T02:52:26.827-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.779-0500 c20012| 2016-04-06T02:52:26.827-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|2, t: 2 } and is durable through: { ts: Timestamp 1459929146000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.785-0500 c20012| 2016-04-06T02:52:26.827-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.804-0500 c20012| 2016-04-06T02:52:26.827-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.829-0500 c20012| 2016-04-06T02:52:26.827-0500 I COMMAND [conn11] command config.chunks command: applyOps { 
applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: -77.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-78.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-77.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|46 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.834-0500 c20012| 2016-04-06T02:52:26.828-0500 D COMMAND [conn11] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:26.828-0500-5704c03a65c17830b843f1ac", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146828), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -78.0 }, max: { _id: MaxKey } }, left: { min: { _id: -78.0 }, max: { _id: -77.0 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -77.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.837-0500 c20012| 2016-04-06T02:52:26.828-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.840-0500 c20012| 2016-04-06T02:52:26.828-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.843-0500 c20012| 2016-04-06T02:52:26.830-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|2, t: 2 }, name-id: "204" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.849-0500 c20012| 2016-04-06T02:52:26.830-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.857-0500 c20012| 2016-04-06T02:52:26.831-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, 
collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.860-0500 c20012| 2016-04-06T02:52:26.831-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.861-0500 c20012| 2016-04-06T02:52:26.831-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.864-0500 c20012| 2016-04-06T02:52:26.831-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|3, t: 2 } and is durable through: { ts: Timestamp 1459929146000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.868-0500 c20012| 2016-04-06T02:52:26.831-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929146000|3, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|2, t: 2 }, name-id: "204" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.870-0500 c20012| 2016-04-06T02:52:26.831-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.874-0500 c20012| 2016-04-06T02:52:26.831-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.875-0500 c20012| 2016-04-06T02:52:26.832-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.876-0500 c20012| 2016-04-06T02:52:26.832-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.877-0500 c20012| 2016-04-06T02:52:26.832-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|3, t: 2 } and is durable through: { ts: Timestamp 1459929146000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.877-0500 
c20012| 2016-04-06T02:52:26.832-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.878-0500 c20012| 2016-04-06T02:52:26.832-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.884-0500 c20012| 2016-04-06T02:52:26.832-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.888-0500 c20012| 2016-04-06T02:52:26.832-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.893-0500 c20012| 2016-04-06T02:52:26.833-0500 I COMMAND [conn11] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:26.828-0500-5704c03a65c17830b843f1ac", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146828), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -78.0 }, max: { _id: MaxKey } }, left: { min: { _id: -78.0 }, max: { _id: -77.0 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -77.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.898-0500 c20012| 2016-04-06T02:52:26.833-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.902-0500 c20012| 2016-04-06T02:52:26.833-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } 
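[annotation] The replSetUpdatePosition traffic above is the replication commit-point handshake: each member reports its appliedOpTime and durableOpTime per memberId, and once a majority is durable at a given optime the primary logs "Updating _lastCommittedOpTime" to that optime, which is what allows w:"majority" writes and 'committed'-snapshot reads at that optime to return. A minimal shell sketch of the client-visible side, assuming a connection to this CSRS primary (the collection name is taken from the log, the document values are illustrative):

    // Write with majority write concern: the command only returns once the
    // commit-point dance logged above has advanced _lastCommittedOpTime past
    // this write's optime (or wtimeout of 15000 ms expires first).
    var res = db.getSiblingDB("config").runCommand({
        insert: "changelog",
        documents: [{ _id: "example-entry", what: "split", time: new Date() }],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    assert.commandWorked(res);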
[js_test:multi_coll_drop] 2016-04-06T02:53:25.904-0500 c20012| 2016-04-06T02:52:26.833-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.930-0500 c20012| 2016-04-06T02:52:26.833-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.934-0500 c20012| 2016-04-06T02:52:26.833-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c03a65c17830b843f1ab') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.937-0500 c20012| 2016-04-06T02:52:26.833-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|3, t: 2 } and is durable through: { ts: Timestamp 1459929146000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.939-0500 c20012| 2016-04-06T02:52:26.833-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.940-0500 c20012| 2016-04-06T02:52:26.833-0500 D QUERY [conn11] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.942-0500 c20012| 2016-04-06T02:52:26.833-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.943-0500 c20012| 2016-04-06T02:52:26.833-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c03a65c17830b843f1ab') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.946-0500 c20012| 2016-04-06T02:52:26.833-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.948-0500 c20012| 2016-04-06T02:52:26.833-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.950-0500 c20012| 2016-04-06T02:52:26.834-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.952-0500 c20012| 2016-04-06T02:52:26.834-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.954-0500 c20012| 2016-04-06T02:52:26.835-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|3, t: 2 }, name-id: "205" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.958-0500 c20012| 2016-04-06T02:52:26.835-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.958-0500 c20012| 2016-04-06T02:52:26.835-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.961-0500 c20012| 2016-04-06T02:52:26.835-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.966-0500 c20012| 2016-04-06T02:52:26.835-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|3, t: 2 } and is durable through: { ts: Timestamp 1459929146000|3, t: 2 } [js_test:multi_coll_drop] 
2016-04-06T02:53:25.968-0500 c20012| 2016-04-06T02:52:26.835-0500 D REPL [conn18] Required snapshot optime: { ts: Timestamp 1459929146000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|3, t: 2 }, name-id: "205" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.970-0500 c20012| 2016-04-06T02:52:26.835-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:25.974-0500 c20012| 2016-04-06T02:52:26.836-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.976-0500 c20012| 2016-04-06T02:52:26.836-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:25.977-0500 c20012| 2016-04-06T02:52:26.836-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:25.980-0500 c20012| 2016-04-06T02:52:26.836-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:25.983-0500 c20012| 2016-04-06T02:52:26.836-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|4, t: 2 } and is durable through: { ts: Timestamp 1459929146000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.983-0500 c20012| 2016-04-06T02:52:26.836-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929146000|4, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|3, t: 2 }, name-id: "205" } [js_test:multi_coll_drop] 2016-04-06T02:53:25.994-0500 c20012| 2016-04-06T02:52:26.836-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:25.998-0500 c20012| 2016-04-06T02:52:26.836-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.021-0500 c20012| 2016-04-06T02:52:26.837-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.023-0500 c20012| 2016-04-06T02:52:26.837-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.024-0500 c20012| 2016-04-06T02:52:26.837-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|4, t: 2 } and is durable through: { ts: Timestamp 1459929146000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.025-0500 c20012| 2016-04-06T02:52:26.837-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.026-0500 c20012| 2016-04-06T02:52:26.837-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.030-0500 c20012| 2016-04-06T02:52:26.837-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.033-0500 c20012| 2016-04-06T02:52:26.837-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c03a65c17830b843f1ab') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.035-0500 c20012| 2016-04-06T02:52:26.837-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { 
acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.043-0500 c20012| 2016-04-06T02:52:26.837-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.047-0500 c20012| 2016-04-06T02:52:26.838-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.054-0500 c20012| 2016-04-06T02:52:26.838-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.057-0500 c20012| 2016-04-06T02:52:26.838-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.060-0500 c20012| 2016-04-06T02:52:26.838-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.062-0500 c20012| 2016-04-06T02:52:26.838-0500 D COMMAND [conn7] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.069-0500 c20012| 2016-04-06T02:52:26.838-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:26.071-0500 c20012| 2016-04-06T02:52:26.838-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.072-0500 c20012| 2016-04-06T02:52:26.840-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.072-0500 c20012| 2016-04-06T02:52:26.840-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.079-0500 c20012| 2016-04-06T02:52:26.840-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.081-0500 c20012| 2016-04-06T02:52:26.840-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|4, t: 2 } and is durable through: { ts: Timestamp 1459929146000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.083-0500 c20012| 2016-04-06T02:52:26.840-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.087-0500 c20012| 2016-04-06T02:52:26.841-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.091-0500 c20012| 2016-04-06T02:52:26.841-0500 D COMMAND [conn7] 
Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.096-0500 c20012| 2016-04-06T02:52:26.841-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.100-0500 c20012| 2016-04-06T02:52:26.841-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:26.113-0500 c20012| 2016-04-06T02:52:26.841-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.129-0500 c20012| 2016-04-06T02:52:26.841-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.129-0500 c20012| 2016-04-06T02:52:26.841-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.132-0500 c20012| 2016-04-06T02:52:26.841-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.138-0500 c20012| 2016-04-06T02:52:26.841-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|4, t: 2 } and is durable through: { ts: Timestamp 1459929146000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.144-0500 c20012| 2016-04-06T02:52:26.841-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.151-0500 c20012| 2016-04-06T02:52:26.841-0500 D COMMAND [conn11] run command config.$cmd { 
findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03a65c17830b843f1ad'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146841), why: "splitting chunk [{ _id: -77.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.154-0500 c20012| 2016-04-06T02:52:26.841-0500 D QUERY [conn11] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.161-0500 c20012| 2016-04-06T02:52:26.841-0500 D QUERY [conn11] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.164-0500 c20012| 2016-04-06T02:52:26.841-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.169-0500 c20012| 2016-04-06T02:52:26.842-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.170-0500 c20012| 2016-04-06T02:52:26.842-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.173-0500 c20012| 2016-04-06T02:52:26.844-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|4, t: 2 }, name-id: "206" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.179-0500 c20012| 2016-04-06T02:52:26.844-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.180-0500 c20012| 2016-04-06T02:52:26.844-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.182-0500 c20012| 2016-04-06T02:52:26.844-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|5, t: 2 } and is durable through: { ts: Timestamp 1459929146000|4, t: 2 } 
[js_test:multi_coll_drop] 2016-04-06T02:53:26.184-0500 c20012| 2016-04-06T02:52:26.844-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929146000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|4, t: 2 }, name-id: "206" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.188-0500 c20011| 2016-04-06T02:52:42.154-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.190-0500 c20011| 2016-04-06T02:52:42.154-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.192-0500 c20011| 2016-04-06T02:52:42.154-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.193-0500 c20011| 2016-04-06T02:52:42.154-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:26.265-0500 c20011| 2016-04-06T02:52:42.154-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.270-0500 c20011| 2016-04-06T02:52:42.155-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1b9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162154), why: "splitting chunk [{ _id: -71.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.271-0500 c20011| 2016-04-06T02:52:42.155-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.272-0500 c20011| 2016-04-06T02:52:42.155-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.274-0500 c20011| 2016-04-06T02:52:42.155-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.278-0500 c20011| 2016-04-06T02:52:42.155-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.281-0500 c20011| 2016-04-06T02:52:42.158-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.287-0500 c20011| 2016-04-06T02:52:42.160-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.287-0500 c20011| 2016-04-06T02:52:42.160-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.288-0500 c20011| 2016-04-06T02:52:42.160-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.289-0500 c20011| 2016-04-06T02:52:42.160-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|8, t: 3 } and is durable through: { ts: Timestamp 1459929162000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.293-0500 c20011| 2016-04-06T02:52:42.160-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.295-0500 c20011| 2016-04-06T02:52:42.175-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|8, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|7, t: 3 }, name-id: "222" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.298-0500 c20011| 2016-04-06T02:52:42.175-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.298-0500 c20011| 2016-04-06T02:52:42.175-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.302-0500 c20011| 2016-04-06T02:52:42.175-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.306-0500 c20011| 2016-04-06T02:52:42.175-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|8, t: 3 } and is durable through: { ts: Timestamp 1459929162000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.306-0500 c20011| 2016-04-06T02:52:42.175-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.313-0500 c20011| 2016-04-06T02:52:42.175-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.316-0500 c20011| 2016-04-06T02:52:42.175-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.327-0500 c20011| 2016-04-06T02:52:42.175-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1b9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162154), why: "splitting chunk [{ _id: -71.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04a65c17830b843f1b9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162154), why: "splitting chunk [{ _id: -71.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 20ms 
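[annotation] With the lock held, the splitter re-reads the highest chunk version at readConcern { level: "majority" } (the "Waiting for 'committed' snapshot" entries) and then commits both halves of the split in a single applyOps guarded by a preCondition on that version, as the entries that follow show for the [-71.0, MaxKey) chunk. A sketch of that commit, with bounds, versions, and epoch copied from the log entries:

    // Commit the split atomically: two chunk-document upserts in one applyOps.
    // The preCondition re-runs the top-version query and aborts the whole
    // batch if any concurrent writer advanced lastmod past Timestamp 1000|60.
    var cfg = db.getSiblingDB("config");
    var epoch = ObjectId('5704c02806c33406d4d9c0c0');
    var res = cfg.runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp(1000, 61),
                   lastmodEpoch: epoch, ns: "multidrop.coll",
                   min: { _id: -71.0 }, max: { _id: -70.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-71.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp(1000, 62),
                   lastmodEpoch: epoch, ns: "multidrop.coll",
                   min: { _id: -70.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-70.0" } }
        ],
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" },
                               orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1000, 60) } } ],
        writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000
    });
    assert.commandWorked(res);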
[js_test:multi_coll_drop] 2016-04-06T02:53:26.328-0500 c20011| 2016-04-06T02:52:42.176-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.335-0500 c20011| 2016-04-06T02:52:42.179-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|60 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|8, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.345-0500 c20011| 2016-04-06T02:52:42.179-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|8, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.349-0500 c20011| 2016-04-06T02:52:42.179-0500 D COMMAND [conn40] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|60 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|8, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.356-0500 c20011| 2016-04-06T02:52:42.179-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:26.357-0500 c20011| 2016-04-06T02:52:42.180-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|60 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|8, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.367-0500 c20011| 2016-04-06T02:52:42.180-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: -70.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-71.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-70.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|60 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.368-0500 c20011| 2016-04-06T02:52:42.180-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:26.370-0500 c20011| 2016-04-06T02:52:42.180-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus 
+ 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:26.372-0500 c20011| 2016-04-06T02:52:42.180-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.373-0500 c20011| 2016-04-06T02:52:42.180-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-71.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.374-0500 c20011| 2016-04-06T02:52:42.180-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-70.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.375-0500 s20014| 2016-04-06T02:53:11.722-0500 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 416 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:41.722-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929191722) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.381-0500 c20013| 2016-04-06T02:52:10.285-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1006 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:15.285-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.383-0500 c20013| 2016-04-06T02:52:10.285-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1006 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:26.385-0500 c20013| 2016-04-06T02:52:12.081-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.386-0500 c20013| 2016-04-06T02:52:12.081-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:26.387-0500 s20014| 2016-04-06T02:53:11.722-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 416 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:26.389-0500 s20015| 2016-04-06T02:53:11.723-0500 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 99 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:41.723-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929191723) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.395-0500 c20011| 2016-04-06T02:52:42.181-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|8, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.397-0500 c20011| 2016-04-06T02:52:42.184-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", 
maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.400-0500 c20011| 2016-04-06T02:52:42.185-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.400-0500 c20011| 2016-04-06T02:52:42.185-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.402-0500 c20011| 2016-04-06T02:52:42.185-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.404-0500 c20011| 2016-04-06T02:52:42.185-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|9, t: 3 } and is durable through: { ts: Timestamp 1459929162000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.417-0500 c20011| 2016-04-06T02:52:42.185-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.422-0500 c20011| 2016-04-06T02:52:42.190-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|9, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|8, t: 3 }, name-id: "223" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.431-0500 c20011| 2016-04-06T02:52:42.190-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.432-0500 c20011| 2016-04-06T02:52:42.190-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.436-0500 c20011| 2016-04-06T02:52:42.190-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.440-0500 c20011| 
2016-04-06T02:52:42.190-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|9, t: 3 } and is durable through: { ts: Timestamp 1459929162000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.443-0500 c20011| 2016-04-06T02:52:42.190-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.449-0500 c20011| 2016-04-06T02:52:42.191-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.451-0500 c20011| 2016-04-06T02:52:42.191-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|8, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.457-0500 c20011| 2016-04-06T02:52:42.191-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: -70.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-71.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-70.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|60 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.459-0500 c20011| 2016-04-06T02:52:42.191-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|9, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.463-0500 c20011| 2016-04-06T02:52:42.192-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.191-0500-5704c04a65c17830b843f1ba", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162191), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -71.0 }, max: { _id: MaxKey } }, left: { min: { _id: -71.0 
}, max: { _id: -70.0 }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -70.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.466-0500 c20011| 2016-04-06T02:52:42.196-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|9, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.467-0500 c20011| 2016-04-06T02:52:42.198-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|10, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|9, t: 3 }, name-id: "224" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.469-0500 c20011| 2016-04-06T02:52:42.207-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|9, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.476-0500 c20011| 2016-04-06T02:52:42.211-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.479-0500 c20011| 2016-04-06T02:52:42.211-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.480-0500 c20011| 2016-04-06T02:52:42.211-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.487-0500 c20011| 2016-04-06T02:52:42.211-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|10, t: 3 } and is durable through: { ts: Timestamp 1459929162000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.489-0500 c20011| 2016-04-06T02:52:42.211-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|10, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|9, t: 3 }, name-id: "224" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.494-0500 c20011| 2016-04-06T02:52:42.211-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.495-0500 c20011| 2016-04-06T02:52:42.213-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 304 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:52.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.496-0500 c20011| 2016-04-06T02:52:42.213-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 304 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:26.498-0500 c20011| 2016-04-06T02:52:42.213-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 304 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, opTime: { ts: Timestamp 1459929162000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.499-0500 c20011| 2016-04-06T02:52:42.213-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:44.213Z [js_test:multi_coll_drop] 2016-04-06T02:53:26.504-0500 c20011| 2016-04-06T02:52:42.235-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.506-0500 c20011| 2016-04-06T02:52:42.235-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.508-0500 c20011| 2016-04-06T02:52:42.235-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.510-0500 c20011| 2016-04-06T02:52:42.235-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|10, t: 3 } and is durable through: { ts: Timestamp 1459929162000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.511-0500 c20011| 2016-04-06T02:52:42.235-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.517-0500 c20011| 2016-04-06T02:52:42.235-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.524-0500 
c20011| 2016-04-06T02:52:42.235-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|9, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 28ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.525-0500 c20011| 2016-04-06T02:52:42.236-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.530-0500 c20011| 2016-04-06T02:52:42.240-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.191-0500-5704c04a65c17830b843f1ba", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162191), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -71.0 }, max: { _id: MaxKey } }, left: { min: { _id: -71.0 }, max: { _id: -70.0 }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -70.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 48ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.532-0500 c20011| 2016-04-06T02:52:42.241-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1b9') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.536-0500 c20011| 2016-04-06T02:52:42.241-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.540-0500 c20011| 2016-04-06T02:52:42.241-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c04a65c17830b843f1b9') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.542-0500 c20011| 2016-04-06T02:52:42.241-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|10, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.546-0500 c20011| 2016-04-06T02:52:42.245-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.550-0500 c20011| 2016-04-06T02:52:42.248-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.552-0500 c20011| 2016-04-06T02:52:42.248-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.555-0500 c20011| 2016-04-06T02:52:42.248-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.556-0500 c20011| 2016-04-06T02:52:42.248-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|11, t: 3 } and is durable through: { ts: Timestamp 1459929162000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.559-0500 c20011| 2016-04-06T02:52:42.248-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.561-0500 c20011| 2016-04-06T02:52:42.274-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|11, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|10, t: 3 }, name-id: "225" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.564-0500 c20011| 2016-04-06T02:52:42.274-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.565-0500 c20011| 2016-04-06T02:52:42.274-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.566-0500 c20011| 2016-04-06T02:52:42.274-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.570-0500 c20011| 2016-04-06T02:52:42.274-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|11, t: 3 } and is durable through: { ts: Timestamp 1459929162000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.571-0500 c20011| 2016-04-06T02:52:42.274-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.575-0500 c20011| 2016-04-06T02:52:42.274-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.578-0500 c20011| 2016-04-06T02:52:42.285-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|10, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 40ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.580-0500 c20011| 2016-04-06T02:52:42.285-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1b9') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 44ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.583-0500 c20011| 2016-04-06T02:52:42.287-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.586-0500 c20011| 2016-04-06T02:52:42.287-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for 
reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.593-0500 c20011| 2016-04-06T02:52:42.287-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.596-0500 c20011| 2016-04-06T02:52:42.288-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:26.601-0500 c20011| 2016-04-06T02:52:42.289-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.604-0500 c20011| 2016-04-06T02:52:42.304-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.607-0500 c20011| 2016-04-06T02:52:42.313-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bb'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162313), why: "splitting chunk [{ _id: -70.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.614-0500 c20011| 2016-04-06T02:52:42.314-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.616-0500 c20011| 2016-04-06T02:52:42.314-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.620-0500 c20011| 2016-04-06T02:52:42.314-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.622-0500 c20011| 2016-04-06T02:52:42.314-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.626-0500 c20011| 2016-04-06T02:52:42.317-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.631-0500 c20011| 2016-04-06T02:52:42.323-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|12, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|11, t: 3 }, name-id: "226" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.639-0500 c20011| 2016-04-06T02:52:42.323-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.640-0500 c20011| 2016-04-06T02:52:42.323-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.644-0500 c20011| 2016-04-06T02:52:42.323-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.646-0500 c20011| 2016-04-06T02:52:42.323-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|12, t: 3 } and is durable through: { ts: Timestamp 1459929162000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.647-0500 c20011| 2016-04-06T02:52:42.323-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|12, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|11, t: 3 }, name-id: "226" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.655-0500 c20011| 2016-04-06T02:52:42.323-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.658-0500 
c20012| 2016-04-06T02:52:26.844-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.685-0500 c20012| 2016-04-06T02:52:26.844-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.693-0500 c20012| 2016-04-06T02:52:26.844-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.705-0500 c20012| 2016-04-06T02:52:26.844-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.719-0500 c20012| 2016-04-06T02:52:26.845-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.720-0500 c20012| 2016-04-06T02:52:26.845-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.723-0500 c20012| 2016-04-06T02:52:26.845-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.729-0500 c20012| 2016-04-06T02:52:26.845-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|5, t: 2 } and is durable through: { ts: Timestamp 1459929146000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.731-0500 c20012| 2016-04-06T02:52:26.845-0500 D REPL [conn18] Required snapshot optime: { ts: Timestamp 1459929146000|5, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|4, t: 2 }, name-id: "206" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.735-0500 c20012| 2016-04-06T02:52:26.845-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { 
ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.740-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.741-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.742-0500 c20012| 2016-04-06T02:52:26.846-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|5, t: 2 } and is durable through: { ts: Timestamp 1459929146000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.743-0500 c20012| 2016-04-06T02:52:26.846-0500 D REPL [conn16] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.745-0500 c20012| 2016-04-06T02:52:26.846-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.750-0500 c20012| 2016-04-06T02:52:26.846-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.751-0500 c20012| 2016-04-06T02:52:26.846-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.756-0500 c20012| 2016-04-06T02:52:26.846-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03a65c17830b843f1ad'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146841), why: "splitting chunk [{ _id: -77.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, 
maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c03a65c17830b843f1ad'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146841), why: "splitting chunk [{ _id: -77.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.758-0500 c20012| 2016-04-06T02:52:26.846-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.759-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn11] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.760-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.762-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn11] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.764-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.765-0500 c20012| 2016-04-06T02:52:26.846-0500 D QUERY [conn11] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:26.769-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.770-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.775-0500 c20012| 2016-04-06T02:52:26.846-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.779-0500 c20012| 2016-04-06T02:52:26.846-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|5, t: 2 } and is durable through: { ts: Timestamp 1459929146000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.785-0500 c20012| 2016-04-06T02:52:26.846-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.787-0500 c20012| 2016-04-06T02:52:26.846-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.797-0500 c20012| 2016-04-06T02:52:26.847-0500 I COMMAND [conn11] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms 
[js_test:multi_coll_drop] 2016-04-06T02:53:26.807-0500 c20012| 2016-04-06T02:52:26.852-0500 D COMMAND [conn11] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: -76.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-77.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|48 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.808-0500 c20012| 2016-04-06T02:52:26.852-0500 D QUERY [conn11] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:26.810-0500 c20012| 2016-04-06T02:52:26.852-0500 D QUERY [conn11] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:26.817-0500 c20012| 2016-04-06T02:52:26.852-0500 I COMMAND [conn11] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.817-0500 c20012| 2016-04-06T02:52:26.852-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-77.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.818-0500 c20012| 2016-04-06T02:52:26.852-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-76.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.823-0500 c20012| 2016-04-06T02:52:26.853-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.827-0500 c20012| 2016-04-06T02:52:26.853-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.828-0500 c20012| 2016-04-06T02:52:26.854-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|6, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|5, t: 2 }, name-id: "207" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.835-0500 
c20012| 2016-04-06T02:52:26.855-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.835-0500 c20012| 2016-04-06T02:52:26.855-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.840-0500 c20012| 2016-04-06T02:52:26.855-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.841-0500 c20012| 2016-04-06T02:52:26.855-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|6, t: 2 } and is durable through: { ts: Timestamp 1459929146000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.842-0500 c20012| 2016-04-06T02:52:26.855-0500 D REPL [conn18] Required snapshot optime: { ts: Timestamp 1459929146000|6, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|5, t: 2 }, name-id: "207" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.845-0500 c20012| 2016-04-06T02:52:26.855-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.847-0500 c20012| 2016-04-06T02:52:26.855-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.848-0500 c20012| 2016-04-06T02:52:26.856-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.851-0500 c20012| 2016-04-06T02:52:26.861-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.851-0500 c20012| 2016-04-06T02:52:26.862-0500 D COMMAND [conn16] command: 
replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.852-0500 c20012| 2016-04-06T02:52:26.862-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|6, t: 2 } and is durable through: { ts: Timestamp 1459929146000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.853-0500 c20012| 2016-04-06T02:52:26.862-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929146000|6, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|5, t: 2 }, name-id: "207" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.855-0500 c20012| 2016-04-06T02:52:26.862-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.858-0500 c20012| 2016-04-06T02:52:26.862-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.861-0500 c20012| 2016-04-06T02:52:26.862-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.861-0500 c20012| 2016-04-06T02:52:26.862-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.863-0500 c20012| 2016-04-06T02:52:26.862-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.864-0500 c20012| 2016-04-06T02:52:26.862-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|6, t: 2 } and is durable through: { ts: Timestamp 1459929146000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.866-0500 c20012| 2016-04-06T02:52:26.862-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.870-0500 c20012| 2016-04-06T02:52:26.862-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, 
appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.877-0500 c20012| 2016-04-06T02:52:26.862-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.882-0500 c20012| 2016-04-06T02:52:26.862-0500 I COMMAND [conn11] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: -76.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-77.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|48 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.884-0500 c20012| 2016-04-06T02:52:26.863-0500 D COMMAND [conn11] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:26.862-0500-5704c03a65c17830b843f1ae", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146862), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -77.0 }, max: { _id: MaxKey } }, left: { min: { _id: -77.0 }, max: { _id: -76.0 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -76.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.888-0500 c20012| 2016-04-06T02:52:26.863-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.891-0500 c20012| 2016-04-06T02:52:26.863-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 
2016-04-06T02:53:26.894-0500 c20012| 2016-04-06T02:52:26.863-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|6, t: 2 } and is durable through: { ts: Timestamp 1459929146000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.895-0500 c20012| 2016-04-06T02:52:26.863-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.901-0500 c20012| 2016-04-06T02:52:26.863-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.908-0500 c20012| 2016-04-06T02:52:26.863-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.913-0500 c20012| 2016-04-06T02:52:26.864-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.920-0500 c20012| 2016-04-06T02:52:26.864-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.927-0500 c20012| 2016-04-06T02:52:26.870-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.934-0500 c20012| 2016-04-06T02:52:26.870-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.935-0500 c20012| 2016-04-06T02:52:26.870-0500 D COMMAND [conn18] command: replSetUpdatePosition 
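The applyOps commands above are how each split is made durable in the config metadata: both post-split chunk documents are upserted in one atomic batch, and a preCondition asserts that the highest lastmod for multidrop.coll is still the pre-split version, so the commit fails rather than clobbering a concurrent update. A hedged sketch of the logged command, trimmed to its essentials; chunk versions such as the log's "Timestamp 1000|49" are assumed to be major version 1, minor version 49 in the shell's Timestamp(t, i) form:

    // Commit a split of [{ _id: -77.0 }, MaxKey) into two chunks, mirroring
    // the command logged at 02:52:26.852 on c20012. Illustrative, not a replay.
    var epoch = ObjectId('5704c02806c33406d4d9c0c0');
    var res = db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o:  { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp(1, 49),
                    lastmodEpoch: epoch, ns: "multidrop.coll",
                    min: { _id: -77.0 }, max: { _id: -76.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-77.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o:  { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp(1, 50),
                    lastmodEpoch: epoch, ns: "multidrop.coll",
                    min: { _id: -76.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-76.0" } }
        ],
        preCondition: [
            { ns: "config.chunks",
              q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
              res: { lastmod: Timestamp(1, 48) } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    // If another writer already bumped lastmod past 1|48, the preCondition
    // fails and neither chunk document is written.
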
[js_test:multi_coll_drop] 2016-04-06T02:53:26.937-0500 c20012| 2016-04-06T02:52:26.870-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.940-0500 c20012| 2016-04-06T02:52:26.870-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|7, t: 2 } and is durable through: { ts: Timestamp 1459929146000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.943-0500 c20012| 2016-04-06T02:52:26.870-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.945-0500 c20012| 2016-04-06T02:52:26.871-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|7, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|6, t: 2 }, name-id: "208" } [js_test:multi_coll_drop] 2016-04-06T02:53:26.949-0500 c20012| 2016-04-06T02:52:26.875-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:26.957-0500 c20012| 2016-04-06T02:52:26.876-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.958-0500 c20012| 2016-04-06T02:52:26.876-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.960-0500 c20012| 2016-04-06T02:52:26.876-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:26.961-0500 c20012| 2016-04-06T02:52:26.876-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:26.964-0500 c20012| 2016-04-06T02:52:26.876-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { 
ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.968-0500 c20012| 2016-04-06T02:52:26.876-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|7, t: 2 } and is durable through: { ts: Timestamp 1459929146000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.970-0500 c20012| 2016-04-06T02:52:26.876-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.974-0500 c20012| 2016-04-06T02:52:26.876-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.977-0500 c20012| 2016-04-06T02:52:26.876-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.977-0500 c20012| 2016-04-06T02:52:26.876-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|7, t: 2 } and is durable through: { ts: Timestamp 1459929146000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.983-0500 c20012| 2016-04-06T02:52:26.876-0500 I COMMAND [conn11] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:26.862-0500-5704c03a65c17830b843f1ae", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146862), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -77.0 }, max: { _id: MaxKey } }, left: { min: { _id: -77.0 }, max: { _id: -76.0 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -76.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.984-0500 c20012| 2016-04-06T02:52:26.876-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.989-0500 c20012| 2016-04-06T02:52:26.876-0500 I COMMAND [conn16] 
command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.996-0500 c20012| 2016-04-06T02:52:26.877-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:26.997-0500 c20012| 2016-04-06T02:52:26.877-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c03a65c17830b843f1ad') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:26.999-0500 c20012| 2016-04-06T02:52:26.877-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.032-0500 c20012| 2016-04-06T02:52:26.877-0500 D QUERY [conn11] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.037-0500 c20012| 2016-04-06T02:52:26.877-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c03a65c17830b843f1ad') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.041-0500 c20012| 2016-04-06T02:52:26.877-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.042-0500 c20012| 2016-04-06T02:52:26.877-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.045-0500 c20012| 2016-04-06T02:52:26.877-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.060-0500 c20012| 2016-04-06T02:52:26.878-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.060-0500 c20012| 2016-04-06T02:52:26.878-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.061-0500 c20012| 2016-04-06T02:52:26.878-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|7, t: 2 } and is durable through: { ts: Timestamp 1459929146000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.062-0500 c20012| 2016-04-06T02:52:26.878-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.067-0500 c20012| 2016-04-06T02:52:26.878-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.069-0500 c20012| 2016-04-06T02:52:26.879-0500 D COMMAND [conn18] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.070-0500 c20012| 2016-04-06T02:52:26.879-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.074-0500 c20012| 2016-04-06T02:52:26.879-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.076-0500 c20012| 2016-04-06T02:52:26.879-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|8, t: 2 } and is durable through: { ts: Timestamp 1459929146000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.090-0500 c20012| 2016-04-06T02:52:26.879-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.093-0500 c20012| 2016-04-06T02:52:26.879-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|8, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|7, t: 2 }, name-id: "209" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.101-0500 c20012| 2016-04-06T02:52:26.880-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.103-0500 c20012| 2016-04-06T02:52:26.880-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.122-0500 c20012| 2016-04-06T02:52:26.881-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.124-0500 c20012| 2016-04-06T02:52:26.881-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.143-0500 c20012| 
2016-04-06T02:52:26.881-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.156-0500 c20012| 2016-04-06T02:52:26.881-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|8, t: 2 } and is durable through: { ts: Timestamp 1459929146000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.157-0500 c20012| 2016-04-06T02:52:26.881-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.159-0500 c20012| 2016-04-06T02:52:26.881-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.161-0500 c20012| 2016-04-06T02:52:26.881-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.163-0500 c20012| 2016-04-06T02:52:26.881-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c03a65c17830b843f1ad') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.166-0500 c20012| 2016-04-06T02:52:26.881-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.167-0500 c20012| 2016-04-06T02:52:26.881-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.168-0500 c20012| 2016-04-06T02:52:26.881-0500 D COMMAND [conn7] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: 
-1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.169-0500 c20012| 2016-04-06T02:52:26.881-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.170-0500 c20012| 2016-04-06T02:52:26.882-0500 D COMMAND [conn7] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.171-0500 c20012| 2016-04-06T02:52:26.882-0500 D COMMAND [conn7] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.171-0500 c20012| 2016-04-06T02:52:26.882-0500 D QUERY [conn7] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:27.172-0500 c20012| 2016-04-06T02:52:26.882-0500 I COMMAND [conn7] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.175-0500 c20012| 2016-04-06T02:52:26.883-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03a65c17830b843f1af'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146883), why: "splitting chunk [{ _id: -76.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.175-0500 c20012| 2016-04-06T02:52:26.883-0500 D QUERY [conn11] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.176-0500 c20012| 2016-04-06T02:52:26.883-0500 D QUERY [conn11] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.177-0500 c20012| 2016-04-06T02:52:26.883-0500 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.177-0500 c20012| 2016-04-06T02:52:26.884-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.180-0500 c20012| 2016-04-06T02:52:26.884-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.182-0500 c20012| 2016-04-06T02:52:26.886-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.182-0500 c20012| 2016-04-06T02:52:26.887-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.187-0500 c20012| 2016-04-06T02:52:26.888-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.187-0500 c20012| 2016-04-06T02:52:26.888-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.188-0500 c20012| 2016-04-06T02:52:26.888-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|8, t: 2 } and is durable through: { ts: Timestamp 1459929146000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.189-0500 c20012| 2016-04-06T02:52:26.888-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.192-0500 c20012| 2016-04-06T02:52:26.888-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: 
Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.196-0500 c20012| 2016-04-06T02:52:26.888-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.197-0500 c20012| 2016-04-06T02:52:26.888-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.198-0500 c20012| 2016-04-06T02:52:26.888-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.201-0500 c20012| 2016-04-06T02:52:26.888-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|9, t: 2 } and is durable through: { ts: Timestamp 1459929146000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.205-0500 c20012| 2016-04-06T02:52:26.888-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.206-0500 c20012| 2016-04-06T02:52:26.889-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|9, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|8, t: 2 }, name-id: "210" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.211-0500 c20012| 2016-04-06T02:52:26.891-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.212-0500 c20012| 2016-04-06T02:52:26.891-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.214-0500 c20012| 2016-04-06T02:52:26.891-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.216-0500 
c20012| 2016-04-06T02:52:26.891-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|9, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.217-0500 c20012| 2016-04-06T02:52:26.891-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.221-0500 c20012| 2016-04-06T02:52:26.891-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.223-0500 c20012| 2016-04-06T02:52:26.891-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.224-0500 c20012| 2016-04-06T02:52:26.891-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.225-0500 c20012| 2016-04-06T02:52:26.891-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|8, t: 2 } and is durable through: { ts: Timestamp 1459929146000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.228-0500 c20012| 2016-04-06T02:52:26.891-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.231-0500 c20012| 2016-04-06T02:52:26.891-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.232-0500 c20012| 2016-04-06T02:52:26.891-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { 
acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.237-0500 c20012| 2016-04-06T02:52:26.891-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.242-0500 c20012| 2016-04-06T02:52:26.891-0500 I COMMAND [conn11] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c03a65c17830b843f1af'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146883), why: "splitting chunk [{ _id: -76.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c03a65c17830b843f1af'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929146883), why: "splitting chunk [{ _id: -76.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.245-0500 c20012| 2016-04-06T02:52:26.892-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.253-0500 c20012| 2016-04-06T02:52:26.892-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.255-0500 c20012| 2016-04-06T02:52:26.892-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.259-0500 c20012| 2016-04-06T02:52:26.892-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|9, t: 2 } and is durable through: { ts: Timestamp 1459929146000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.263-0500 c20012| 2016-04-06T02:52:26.892-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.278-0500 c20012| 2016-04-06T02:52:26.892-0500 I COMMAND [conn16] command admin.$cmd command: 
replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.280-0500 c20012| 2016-04-06T02:52:26.892-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.283-0500 c20012| 2016-04-06T02:52:26.892-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.285-0500 c20012| 2016-04-06T02:52:26.892-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.288-0500 c20012| 2016-04-06T02:52:26.892-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|9, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.292-0500 c20012| 2016-04-06T02:52:26.892-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.300-0500 c20012| 2016-04-06T02:52:26.892-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.303-0500 c20012| 2016-04-06T02:52:26.892-0500 D COMMAND [conn11] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|50 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.310-0500 c20012| 2016-04-06T02:52:26.892-0500 D COMMAND [conn11] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.315-0500 c20012| 
2016-04-06T02:52:26.892-0500 D COMMAND [conn11] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|50 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.319-0500 c20012| 2016-04-06T02:52:26.893-0500 D QUERY [conn11] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:27.339-0500 c20012| 2016-04-06T02:52:26.893-0500 I COMMAND [conn11] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|50 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.344-0500 c20012| 2016-04-06T02:52:26.893-0500 D COMMAND [conn11] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: -75.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|50 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.345-0500 c20012| 2016-04-06T02:52:26.893-0500 D QUERY [conn11] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:27.348-0500 c20012| 2016-04-06T02:52:26.893-0500 D QUERY [conn11] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:27.351-0500 c20012| 2016-04-06T02:52:26.893-0500 I COMMAND [conn11] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.352-0500 c20012| 2016-04-06T02:52:26.893-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-76.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.353-0500 c20012| 2016-04-06T02:52:26.893-0500 D QUERY [conn11] Using idhack: { _id: "multidrop.coll-_id_-75.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.357-0500 c20012| 2016-04-06T02:52:26.894-0500 I COMMAND [conn17] command 
local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } cursorid:25449496203 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.359-0500 c20012| 2016-04-06T02:52:26.894-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } cursorid:22197973872 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.363-0500 c20012| 2016-04-06T02:52:26.895-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929146000|10, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|9, t: 2 }, name-id: "211" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.366-0500 c20012| 2016-04-06T02:52:26.896-0500 D COMMAND [conn17] run command local.$cmd { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.367-0500 c20012| 2016-04-06T02:52:26.898-0500 D COMMAND [conn15] run command local.$cmd { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.371-0500 c20012| 2016-04-06T02:52:26.902-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.371-0500 c20012| 2016-04-06T02:52:26.902-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.374-0500 c20012| 2016-04-06T02:52:26.902-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.375-0500 c20012| 2016-04-06T02:52:26.902-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.379-0500 c20012| 2016-04-06T02:52:26.902-0500 D REPL [conn18] Required snapshot optime: { ts: Timestamp 1459929146000|10, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|9, t: 2 }, name-id: "211" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.383-0500 c20012| 2016-04-06T02:52:26.903-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.397-0500 c20012| 2016-04-06T02:52:26.903-0500 D COMMAND [conn18] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.398-0500 c20012| 2016-04-06T02:52:26.903-0500 D COMMAND [conn18] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:27.400-0500 c20012| 2016-04-06T02:52:26.904-0500 D REPL [conn18] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.403-0500 c20012| 2016-04-06T02:52:26.904-0500 D REPL [conn18] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.405-0500 c20012| 2016-04-06T02:52:26.904-0500 D REPL [conn18] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.420-0500 c20012| 2016-04-06T02:52:26.904-0500 I COMMAND [conn18] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.424-0500 c20012| 2016-04-06T02:52:26.904-0500 I COMMAND [conn17] command local.oplog.rs command: getMore { getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } cursorid:25449496203 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.433-0500 c20012| 2016-04-06T02:52:26.904-0500 I COMMAND [conn11] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", 
min: { _id: -76.0 }, max: { _id: -75.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|50 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.433-0500 c20012| 2016-04-06T02:52:41.707-0500 D NETWORK [conn17] SocketException: remote: 192.168.100.28:37532 error: 9001 socket exception [CLOSED] server [192.168.100.28:37532] [js_test:multi_coll_drop] 2016-04-06T02:53:27.434-0500 c20012| 2016-04-06T02:52:41.707-0500 I NETWORK [conn17] end connection 192.168.100.28:37532 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.436-0500 c20012| 2016-04-06T02:52:41.707-0500 D COMMAND [conn11] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20010:1459929128:185613966" }, update: { $set: { ping: new Date(1459929158371) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.437-0500 c20012| 2016-04-06T02:52:41.707-0500 D QUERY [conn11] Using idhack: { _id: "mongovm16:20010:1459929128:185613966" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.439-0500 c20012| 2016-04-06T02:52:41.710-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|10, t: 2 }, name-id: "212" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.442-0500 c20012| 2016-04-06T02:52:26.904-0500 I COMMAND [conn15] command local.oplog.rs command: getMore { getMore: 22197973872, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } cursorid:22197973872 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.448-0500 c20012| 2016-04-06T02:52:26.909-0500 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.453-0500 c20012| 2016-04-06T02:52:27.690-0500 D COMMAND [conn12] run command admin.$cmd { replSetStepDown: 10.0, force: true } [js_test:multi_coll_drop] 2016-04-06T02:53:27.454-0500 c20012| 2016-04-06T02:52:41.717-0500 D COMMAND [conn16] command: replSetUpdatePosition [js_test:multi_coll_drop] 
2016-04-06T02:53:27.454-0500 c20012| 2016-04-06T02:52:41.717-0500 D COMMAND [conn12] command: replSetStepDown [js_test:multi_coll_drop] 2016-04-06T02:53:27.456-0500 c20012| 2016-04-06T02:52:28.811-0500 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.456-0500 c20012| 2016-04-06T02:52:41.717-0500 D COMMAND [conn3] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:27.459-0500 c20012| 2016-04-06T02:52:28.811-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1064 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.461-0500 c20012| 2016-04-06T02:52:31.653-0500 D COMMAND [conn7] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.462-0500 c20012| 2016-04-06T02:52:31.901-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38055 #19 (17 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.463-0500 c20012| 2016-04-06T02:52:32.631-0500 D COMMAND [conn9] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.466-0500 *** Stepping down connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.467-0500 c20012| 2016-04-06T02:52:41.717-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38056 #20 (17 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.470-0500 c20012| 2016-04-06T02:52:28.812-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.471-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:27.471-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn19] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.472-0500 c20012| 2016-04-06T02:52:36.810-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.472-0500 c20012| 2016-04-06T02:52:33.720-0500 D COMMAND [conn6] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.473-0500 c20012| 2016-04-06T02:52:38.360-0500 D COMMAND [conn10] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.473-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38076 #21 (18 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.476-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn20] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } 
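Here the continuous-stepdown override the test loaded at startup (sharding_continuous_config_stepdown.js) kicks in: conn12 issues replSetStepDown with force: true against the current config primary, the server interrupts in-flight operations (the "received interrupt request for unknown op" lines below), drops client connections, and the mongos and shell clients immediately reconnect and probe with isMaster to rediscover the topology. The jumbled server-side timestamps around this point (02:52:28 and 02:52:31 entries printed after 02:52:41 ones) appear to be buffered output being drained in a burst once the stepdown proceeds, not clock skew; the [js_test] wall-clock prefixes on the left stay monotonic. A sketch of the stepdown as a client would issue it; the try/catch is needed because the server closes the connection as part of stepping down:

    // Force the primary to step down for 10 seconds even if no secondary
    // is fully caught up (force: true skips the catch-up check).
    try {
        db.adminCommand({ replSetStepDown: 10.0, force: true });
    } catch (e) {
        // Expected: stepdown kills client connections, so the command
        // usually reports a network error even when it succeeded.
        print("stepdown closed the connection: " + e);
    }

    // Rediscover the topology, as the flood of isMaster probes above does.
    db.adminCommand({ isMaster: 1 });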
[js_test:multi_coll_drop] 2016-04-06T02:53:27.479-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38077 #22 (19 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.479-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38078 #23 (20 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.481-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn22] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.482-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38399 #24 (21 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.483-0500 c20012| 2016-04-06T02:52:41.717-0500 D NETWORK [conn15] SocketException: remote: 192.168.100.28:37470 error: 9001 socket exception [CLOSED] server [192.168.100.28:37470] [js_test:multi_coll_drop] 2016-04-06T02:53:27.485-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [conn15] end connection 192.168.100.28:37470 (20 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.488-0500 c20012| 2016-04-06T02:52:41.717-0500 I COMMAND [conn12] Attempting to step down in response to replSetStepDown command [js_test:multi_coll_drop] 2016-04-06T02:53:27.489-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38409 #25 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.493-0500 c20012| 2016-04-06T02:52:41.717-0500 D REPL [conn16] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.497-0500 c20012| 2016-04-06T02:52:41.718-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|10, t: 2 }, name-id: "212" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.497-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn24] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.501-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn23] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.503-0500 c20012| 2016-04-06T02:52:37.337-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.507-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn25] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.508-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38411 #26 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.509-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38412 #27 (23 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.510-0500 c20012| 2016-04-06T02:52:41.718-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38455 #28 (24 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.511-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn27] run command admin.$cmd { isMaster: 1, 
hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.512-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn26] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.513-0500 c20012| 2016-04-06T02:52:41.718-0500 D COMMAND [conn28] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.517-0500 c20012| 2016-04-06T02:52:41.717-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1065 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:51.717-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.523-0500 c20012| 2016-04-06T02:52:41.719-0500 D REPL [conn16] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929142000|12, t: 2 } and is durable through: { ts: Timestamp 1459929142000|12, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.529-0500 c20012| 2016-04-06T02:52:41.719-0500 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.530-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn7] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.532-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn9] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.537-0500 c20012| 2016-04-06T02:52:41.719-0500 D REPL [conn9] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|10, t: 2 }, name-id: "212" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.543-0500 c20012| 2016-04-06T02:52:41.719-0500 I WRITE [conn9] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.547-0500 c20012| 2016-04-06T02:52:41.719-0500 D REPL [conn7] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|10, t: 2 }, name-id: "212" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.548-0500 c20012| 2016-04-06T02:52:41.719-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1064 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:27.553-0500 c20012| 2016-04-06T02:52:41.719-0500 I WRITE [conn7] update config.mongos query: { _id: "mongovm16:20014" } update: { 
$set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.554-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 996 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.555-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 995 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.556-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 993 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.557-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 991 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.561-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 990 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.562-0500 c20012| 2016-04-06T02:52:41.719-0500 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.563-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 988 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.564-0500 c20012| 2016-04-06T02:52:41.719-0500 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.565-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 981 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.566-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 989 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.569-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 974 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.571-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 978 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.573-0500 c20012| 2016-04-06T02:52:41.719-0500 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.574-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 992 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.575-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 977 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.576-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 973 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.578-0500 c20012| 2016-04-06T02:52:41.719-0500 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 1ms 
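Once the stepdown begins, conn12 interrupts the in-flight operations it can find (the "received interrupt request" lines) while other clients reprobe the node with isMaster. A sketch of such a probe, assuming a shell already connected to the stepped-down node:

    // After a stepdown, ismaster flips to false; "primary" names the
    // new primary once this node hears about it via heartbeats.
    var hello = db.adminCommand({isMaster: 1});
    print("ismaster: " + hello.ismaster +
          ", secondary: " + hello.secondary +
          ", primary: " + (hello.primary || "unknown"));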
[js_test:multi_coll_drop] 2016-04-06T02:53:27.581-0500 c20012| 2016-04-06T02:52:41.719-0500 D REPL [conn11] Required snapshot optime: { ts: Timestamp 1459929161000|1, t: 2 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929146000|10, t: 2 }, name-id: "212" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.582-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 986 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.583-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 983 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.583-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 994 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.584-0500 c20012| 2016-04-06T02:52:41.719-0500 D QUERY [conn12] received interrupt request for unknown op: 980 known ops: [js_test:multi_coll_drop] 2016-04-06T02:53:27.585-0500 c20012| 2016-04-06T02:52:41.720-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.586-0500 c20012| 2016-04-06T02:52:41.719-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1065 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.587-0500 c20012| 2016-04-06T02:52:41.720-0500 D COMMAND [conn22] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.588-0500 c20012| 2016-04-06T02:52:41.720-0500 D COMMAND [conn22] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:27.590-0500 c20012| 2016-04-06T02:52:41.720-0500 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.591-0500 c20012| 2016-04-06T02:52:41.720-0500 D COMMAND [conn20] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:27.592-0500 c20012| 2016-04-06T02:52:41.720-0500 I COMMAND [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.595-0500 c20012| 2016-04-06T02:52:41.720-0500 D NETWORK [conn6] SocketException: remote: 192.168.100.28:36389 error: 9001 socket exception [CLOSED] server [192.168.100.28:36389] [js_test:multi_coll_drop] 2016-04-06T02:53:27.597-0500 c20012| 2016-04-06T02:52:41.720-0500 I NETWORK [conn6] end connection 192.168.100.28:36389 (23 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.606-0500 c20012| 2016-04-06T02:52:41.720-0500 I COMMAND [conn24] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.609-0500 c20012| 2016-04-06T02:52:41.720-0500 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.612-0500 c20012| 2016-04-06T02:52:41.720-0500 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.614-0500 c20012| 
2016-04-06T02:52:41.720-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.614-0500 c20012| 2016-04-06T02:52:41.720-0500 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:27.617-0500 c20012| 2016-04-06T02:52:41.721-0500 D COMMAND [conn21] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.619-0500 c20012| 2016-04-06T02:52:41.721-0500 D COMMAND [conn24] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929156593) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.620-0500 c20012| 2016-04-06T02:52:41.721-0500 D - [conn24] User Assertion: 10107:not master [js_test:multi_coll_drop] 2016-04-06T02:53:27.623-0500 c20012| 2016-04-06T02:52:41.721-0500 D COMMAND [conn26] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929157363) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.623-0500 c20012| 2016-04-06T02:52:41.721-0500 D - [conn26] User Assertion: 10107:not master [js_test:multi_coll_drop] 2016-04-06T02:53:27.630-0500 c20012| 2016-04-06T02:52:41.721-0500 I COMMAND [conn11] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20010:1459929128:185613966" }, update: { $set: { ping: new Date(1459929158371) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1459929158371) } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:499 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.632-0500 c20012| 2016-04-06T02:52:41.721-0500 I COMMAND [conn27] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.635-0500 c20012| 2016-04-06T02:52:41.721-0500 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } numYields:0 reslen:439 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.636-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn27] Socket recv() Connection reset by peer 192.168.100.28:38412 [js_test:multi_coll_drop] 2016-04-06T02:53:27.641-0500 c20012| 2016-04-06T02:52:41.722-0500 D COMMAND [conn26] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929157363) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 10107 not master [js_test:multi_coll_drop] 2016-04-06T02:53:27.642-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn16] 
SocketException: remote: 192.168.100.28:37476 error: 9001 socket exception [CLOSED] server [192.168.100.28:37476] [js_test:multi_coll_drop] 2016-04-06T02:53:27.643-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn16] end connection 192.168.100.28:37476 (22 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.663-0500 c20012| 2016-04-06T02:52:41.722-0500 I COMMAND [conn26] command config.$cmd command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929157363) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } exception: not master code:10107 numYields:0 reslen:55 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.664-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn26] Socket say send() Bad file descriptor 192.168.100.28:38411 [js_test:multi_coll_drop] 2016-04-06T02:53:27.669-0500 c20012| 2016-04-06T02:52:41.722-0500 I COMMAND [conn12] command admin.$cmd command: replSetStepDown { replSetStepDown: 10.0, force: true } numYields:0 reslen:82 locks:{ Global: { acquireCount: { r: 1, R: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.669-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn12] Socket say send() Bad file descriptor 192.168.100.28:36863 [js_test:multi_coll_drop] 2016-04-06T02:53:27.671-0500 c20012| 2016-04-06T02:52:41.722-0500 D COMMAND [conn24] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929156593) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 10107 not master [js_test:multi_coll_drop] 2016-04-06T02:53:27.676-0500 c20012| 2016-04-06T02:52:41.722-0500 I COMMAND [conn24] command config.$cmd command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929156593) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } exception: not master code:10107 numYields:0 reslen:55 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.681-0500 c20012| 2016-04-06T02:52:41.722-0500 I COMMAND [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after connections: 0, after extra_info: 0, after globalLock: 0, after locks: 0, after network: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 2848, after storageEngine: 2848, after tcmalloc: 2848, after wiredTiger: 2848, at end: 2848 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.682-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn24] Socket say send() Bad file descriptor 192.168.100.28:38399 [js_test:multi_coll_drop] 2016-04-06T02:53:27.683-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn26] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:38411] [js_test:multi_coll_drop] 2016-04-06T02:53:27.685-0500 c20012| 2016-04-06T02:52:41.722-0500 I COMMAND [conn21] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:429 locks:{} protocol:op_query 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.688-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn12] SocketException handling request, closing 
client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:36863] [js_test:multi_coll_drop] 2016-04-06T02:53:27.691-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn8] SocketException: remote: 192.168.100.28:36644 error: 9001 socket exception [CLOSED] server [192.168.100.28:36644] [js_test:multi_coll_drop] 2016-04-06T02:53:27.693-0500 c20012| 2016-04-06T02:52:41.722-0500 I COMMAND [conn22] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:439 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.695-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn8] end connection 192.168.100.28:36644 (19 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.698-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn1] SocketException: remote: 127.0.0.1:54926 error: 9001 socket exception [CLOSED] server [127.0.0.1:54926] [js_test:multi_coll_drop] 2016-04-06T02:53:27.700-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn11] SocketException: remote: 192.168.100.28:36790 error: 9001 socket exception [CLOSED] server [192.168.100.28:36790] [js_test:multi_coll_drop] 2016-04-06T02:53:27.700-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn1] end connection 127.0.0.1:54926 (18 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.702-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn11] end connection 192.168.100.28:36790 (18 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.708-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn24] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:38399] [js_test:multi_coll_drop] 2016-04-06T02:53:27.712-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn10] SocketException: remote: 192.168.100.28:36771 error: 9001 socket exception [CLOSED] server [192.168.100.28:36771] [js_test:multi_coll_drop] 2016-04-06T02:53:27.715-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn10] end connection 192.168.100.28:36771 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.717-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn27] SocketException: remote: 192.168.100.28:38412 error: 9001 socket exception [RECV_ERROR] server [192.168.100.28:38412] [js_test:multi_coll_drop] 2016-04-06T02:53:27.718-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn27] end connection 192.168.100.28:38412 (14 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.721-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn14] SocketException: remote: 192.168.100.28:37469 error: 9001 socket exception [CLOSED] server [192.168.100.28:37469] [js_test:multi_coll_drop] 2016-04-06T02:53:27.722-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn14] end connection 192.168.100.28:37469 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.723-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn21] Socket say send() Bad file descriptor 192.168.100.28:38076 [js_test:multi_coll_drop] 2016-04-06T02:53:27.726-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn21] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:38076] [js_test:multi_coll_drop] 2016-04-06T02:53:27.727-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn22] SocketException: remote: 
192.168.100.28:38077 error: 9001 socket exception [CLOSED] server [192.168.100.28:38077] [js_test:multi_coll_drop] 2016-04-06T02:53:27.729-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn22] end connection 192.168.100.28:38077 (11 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.731-0500 c20012| 2016-04-06T02:52:41.722-0500 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:480 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.733-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn3] Socket say send() Bad file descriptor 192.168.100.28:36071 [js_test:multi_coll_drop] 2016-04-06T02:53:27.735-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn3] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:36071] [js_test:multi_coll_drop] 2016-04-06T02:53:27.738-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn19] SocketException: remote: 192.168.100.28:38055 error: 9001 socket exception [CLOSED] server [192.168.100.28:38055] [js_test:multi_coll_drop] 2016-04-06T02:53:27.741-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn19] end connection 192.168.100.28:38055 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.744-0500 c20012| 2016-04-06T02:52:41.722-0500 D NETWORK [conn23] SocketException: remote: 192.168.100.28:38078 error: 9001 socket exception [CLOSED] server [192.168.100.28:38078] [js_test:multi_coll_drop] 2016-04-06T02:53:27.746-0500 c20012| 2016-04-06T02:52:41.722-0500 I NETWORK [conn23] end connection 192.168.100.28:38078 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.748-0500 c20012| 2016-04-06T02:52:41.723-0500 D NETWORK [conn18] SocketException: remote: 192.168.100.28:37533 error: 9001 socket exception [CLOSED] server [192.168.100.28:37533] [js_test:multi_coll_drop] 2016-04-06T02:53:27.749-0500 c20012| 2016-04-06T02:52:41.723-0500 I NETWORK [conn18] end connection 192.168.100.28:37533 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.752-0500 c20012| 2016-04-06T02:52:41.723-0500 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.753-0500 c20012| 2016-04-06T02:52:41.723-0500 D NETWORK [conn25] Socket say send() Bad file descriptor 192.168.100.28:38409 [js_test:multi_coll_drop] 2016-04-06T02:53:27.756-0500 c20012| 2016-04-06T02:52:41.723-0500 I NETWORK [conn25] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:38409] [js_test:multi_coll_drop] 2016-04-06T02:53:27.759-0500 c20012| 2016-04-06T02:52:41.723-0500 I COMMAND [conn28] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:482 locks:{} protocol:op_query 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.765-0500 c20012| 2016-04-06T02:52:41.723-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } numYields:0 reslen:480 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.768-0500 c20012| 2016-04-06T02:52:41.723-0500 D NETWORK [conn28] Socket say send() Bad file descriptor 
192.168.100.28:38455 [js_test:multi_coll_drop] 2016-04-06T02:53:27.769-0500 c20012| 2016-04-06T02:52:41.723-0500 D NETWORK [conn5] Socket say send() Bad file descriptor 192.168.100.28:36205 [js_test:multi_coll_drop] 2016-04-06T02:53:27.772-0500 c20012| 2016-04-06T02:52:41.723-0500 I NETWORK [conn28] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:38455] [js_test:multi_coll_drop] 2016-04-06T02:53:27.773-0500 c20012| 2016-04-06T02:52:41.723-0500 I NETWORK [conn5] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:36205] [js_test:multi_coll_drop] 2016-04-06T02:53:27.784-0500 c20012| 2016-04-06T02:52:41.725-0500 D NETWORK [conn13] SocketException: remote: 192.168.100.28:37082 error: 9001 socket exception [CLOSED] server [192.168.100.28:37082] [js_test:multi_coll_drop] 2016-04-06T02:53:27.786-0500 c20012| 2016-04-06T02:52:41.725-0500 I NETWORK [conn13] end connection 192.168.100.28:37082 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.806-0500 c20012| 2016-04-06T02:52:41.726-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1065 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.814-0500 c20012| 2016-04-06T02:52:41.726-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1064 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.823-0500 c20012| 2016-04-06T02:52:41.726-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929152000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.840-0500 c20012| 2016-04-06T02:52:41.726-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:27.844-0500 c20012| 2016-04-06T02:52:41.726-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:44.226Z [js_test:multi_coll_drop] 2016-04-06T02:53:27.846-0500 c20012| 2016-04-06T02:52:41.726-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:44.226Z [js_test:multi_coll_drop] 2016-04-06T02:53:27.848-0500 c20012| 2016-04-06T02:52:41.727-0500 I COMMAND [conn9] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929152631), up: 25, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:473 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.849-0500 c20012| 2016-04-06T02:52:41.727-0500 D NETWORK [conn9] Socket say send() Bad file descriptor 192.168.100.28:36647 [js_test:multi_coll_drop] 2016-04-06T02:53:27.850-0500 c20012| 
2016-04-06T02:52:41.728-0500 I NETWORK [conn9] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:36647] [js_test:multi_coll_drop] 2016-04-06T02:53:27.859-0500 c20012| 2016-04-06T02:52:41.735-0500 I COMMAND [conn7] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929151652), up: 24, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:473 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 17ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.860-0500 c20012| 2016-04-06T02:52:41.735-0500 D NETWORK [conn7] Socket say send() Bad file descriptor 192.168.100.28:36634 [js_test:multi_coll_drop] 2016-04-06T02:53:27.862-0500 c20012| 2016-04-06T02:52:41.735-0500 I NETWORK [conn7] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:36634] [js_test:multi_coll_drop] 2016-04-06T02:53:27.865-0500 c20012| 2016-04-06T02:52:42.589-0500 D REPL [rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp 1459929161000|3, t: 2 } 1706076688285321939 [js_test:multi_coll_drop] 2016-04-06T02:53:27.866-0500 c20012| 2016-04-06T02:52:42.954-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38680 #29 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.868-0500 c20012| 2016-04-06T02:52:42.957-0500 D COMMAND [conn29] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.870-0500 c20012| 2016-04-06T02:52:42.957-0500 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.874-0500 c20012| 2016-04-06T02:52:43.722-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38733 #30 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:27.876-0500 c20012| 2016-04-06T02:52:43.722-0500 D COMMAND [conn30] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:27.883-0500 c20012| 2016-04-06T02:52:43.722-0500 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.885-0500 c20012| 2016-04-06T02:52:43.723-0500 D COMMAND [conn30] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.885-0500 c20012| 2016-04-06T02:52:43.723-0500 D COMMAND [conn30] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:27.889-0500 c20012| 2016-04-06T02:52:43.723-0500 I COMMAND [conn30] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.890-0500 c20012| 2016-04-06T02:52:43.875-0500 D COMMAND [conn29] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 
2016-04-06T02:53:27.892-0500 c20012| 2016-04-06T02:52:43.875-0500 I COMMAND [conn29] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:27.894-0500 c20012| 2016-04-06T02:52:44.227-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1068 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.897-0500 c20012| 2016-04-06T02:52:44.227-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1069 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.901-0500 c20012| 2016-04-06T02:52:44.227-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1068 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.902-0500 c20012| 2016-04-06T02:52:44.227-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1069 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:27.906-0500 c20012| 2016-04-06T02:52:44.228-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1068 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, opTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.910-0500 c20012| 2016-04-06T02:52:44.228-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.911-0500 c20012| 2016-04-06T02:52:44.228-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:46.728Z [js_test:multi_coll_drop] 2016-04-06T02:53:27.914-0500 c20012| 2016-04-06T02:52:44.590-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.923-0500 c20012| 2016-04-06T02:52:44.591-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1071 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:14.591-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.925-0500 c20012| 2016-04-06T02:52:44.591-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1071 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.928-0500 c20012| 2016-04-06T02:52:44.591-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1071 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.930-0500 c20012| 2016-04-06T02:52:44.591-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20011 starting at filter: { ts: { $gte: Timestamp 1459929161000|3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:27.935-0500 c20012| 2016-04-06T02:52:44.591-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1073 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:49.591-0500 cmd:{ find: "oplog.rs", filter: { ts: { 
$gte: Timestamp 1459929161000|3 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.936-0500 c20012| 2016-04-06T02:52:44.591-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.938-0500 c20012| 2016-04-06T02:52:44.591-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1073 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.943-0500 c20012| 2016-04-06T02:52:44.592-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.947-0500 c20012| 2016-04-06T02:52:44.592-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1074 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.949-0500 c20012| 2016-04-06T02:52:44.592-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1074 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.958-0500 c20012| 2016-04-06T02:52:44.592-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1074 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.973-0500 c20012| 2016-04-06T02:52:44.593-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] warning: log line attempted (21kB) over max size (10kB), printing beginning and end ... 
Request 1073 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929161000|3, t: 3, h: 348221258137002286, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929151652), up: 24, waiting: false } } }, { ts: Timestamp 1459929161000|4, t: 3, h: 569718958403941141, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929161000|5, t: 3, h: 7208870335463155550, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929161743), up: 34, waiting: true } } }, { ts: Timestamp 1459929161000|6, t: 3, h: 9145859565647178306, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929161747), up: 34, waiting: true } } }, { ts: Timestamp 1459929161000|7, t: 3, h: 5502916262959992045, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04965c17830b843f1b1'), state: 2, when: new Date(1459929161772), why: "splitting chunk [{ _id: -75.0 }, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929161000|8, t: 3, h: 6949985940899244306, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-74.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929161000|9, t: 3, h: -4617580344049194992, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:41.797-0500-5704c04965c17830b843f1b2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161797), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -75.0 }, max: { _id: MaxKey } }, left: { min: { _id: -75.0 }, max: { _id: -74.0 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -74.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929161000|10, t: 3, h: -6490455652975516690, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929161000|11, t: 3, h: 5945394017863447987, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04965c17830b843f1b3'), state: 2, when: new Date(1459929161842), why: "splitting chunk [{ _id: -74.0 }, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929161000|12, t: 3, h: 4287115959176304978, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: -73.0 }, shard: "shard0000" }, o2: { _id: "multidrop.col .......... 
_id_-66.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929163000|2, t: 3, h: -3691712439411572840, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:43.119-0500-5704c04b65c17830b843f1c4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163119), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -66.0 }, max: { _id: MaxKey } }, left: { min: { _id: -66.0 }, max: { _id: -65.0 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -65.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929163000|3, t: 3, h: -5230974407681466498, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929163000|4, t: 3, h: 6336516151299301636, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04b65c17830b843f1c5'), state: 2, when: new Date(1459929163203), why: "splitting chunk [{ _id: -65.0 }, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929163000|5, t: 3, h: -8172355748864553859, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: -64.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929163000|6, t: 3, h: -317850286324307218, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:43.260-0500-5704c04b65c17830b843f1c6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163260), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -65.0 }, max: { _id: MaxKey } }, left: { min: { _id: -65.0 }, max: { _id: -64.0 }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -64.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929163000|7, t: 3, h: 2232396361430522479, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929163000|8, t: 3, h: -788849406847319887, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04b65c17830b843f1c7'), state: 2, when: new Date(1459929163335), why: "splitting chunk [{ _id: -64.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 20716408231, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.975-0500 c20012| 2016-04-06T02:52:44.593-0500 D REPL [rsBackgroundSync-0] fetcher read 49 
operations from remote oplog starting at ts: Timestamp 1459929161000|3 and ending at ts: Timestamp 1459929163000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:27.978-0500 c20012| 2016-04-06T02:52:44.593-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1077 -- target:mongovm16:20011 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 20716408231 ] } [js_test:multi_coll_drop] 2016-04-06T02:53:27.980-0500 c20012| 2016-04-06T02:52:44.593-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1077 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.985-0500 c20012| 2016-04-06T02:52:44.593-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1077 finished with response: { cursorsKilled: [ 20716408231 ], cursorsNotFound: [], cursorsAlive: [], cursorsUnknown: [], ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:27.989-0500 c20012| 2016-04-06T02:52:44.593-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:27.992-0500 c20012| 2016-04-06T02:52:44.593-0500 I REPL [rsBackgroundSync] Starting rollback due to OplogStartMissing: our last op time fetched: { ts: Timestamp 1459929161000|3, t: 2 }. source's GTE: { ts: Timestamp 1459929161000|3, t: 3 } hashes: (1706076688285321939/348221258137002286) [js_test:multi_coll_drop] 2016-04-06T02:53:27.994-0500 c20012| 2016-04-06T02:52:44.593-0500 I REPL [rsBackgroundSync] beginning rollback [js_test:multi_coll_drop] 2016-04-06T02:53:27.995-0500 c20012| 2016-04-06T02:52:44.593-0500 I REPL [rsBackgroundSync] rollback 0 [js_test:multi_coll_drop] 2016-04-06T02:53:27.996-0500 c20012| 2016-04-06T02:52:44.593-0500 I REPL [ReplicationExecutor] transition to ROLLBACK [js_test:multi_coll_drop] 2016-04-06T02:53:27.997-0500 c20012| 2016-04-06T02:52:44.594-0500 I REPL [rsBackgroundSync] rollback 1 [js_test:multi_coll_drop] 2016-04-06T02:53:27.998-0500 c20012| 2016-04-06T02:52:44.594-0500 D NETWORK [conn29] SocketException: remote: 192.168.100.28:38680 error: 9001 socket exception [CLOSED] server [192.168.100.28:38680] [js_test:multi_coll_drop] 2016-04-06T02:53:27.999-0500 c20012| 2016-04-06T02:52:44.594-0500 I NETWORK [conn29] end connection 192.168.100.28:38680 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:28.003-0500 c20012| 2016-04-06T02:52:44.594-0500 D NETWORK [conn20] SocketException: remote: 192.168.100.28:38056 error: 9001 socket exception [CLOSED] server [192.168.100.28:38056] [js_test:multi_coll_drop] 2016-04-06T02:53:28.004-0500 c20012| 2016-04-06T02:52:44.594-0500 I NETWORK [conn20] end connection 192.168.100.28:38056 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:28.011-0500 c20012| 2016-04-06T02:52:44.594-0500 D NETWORK [conn30] SocketException: remote: 192.168.100.28:38733 error: 9001 socket exception [CLOSED] server [192.168.100.28:38733] [js_test:multi_coll_drop] 2016-04-06T02:53:28.012-0500 c20012| 2016-04-06T02:52:44.594-0500 I NETWORK [conn30] end connection 192.168.100.28:38733 (0 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:28.014-0500 c20012| 2016-04-06T02:52:44.594-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:28.014-0500 c20012| 2016-04-06T02:52:44.594-0500 D NETWORK [rsBackgroundSync] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:28.015-0500 c20012| 2016-04-06T02:52:44.594-0500 I REPL [rsBackgroundSync] rollback 2 FindCommonPoint [js_test:multi_coll_drop] 
2016-04-06T02:53:28.015-0500 c20012| 2016-04-06T02:52:44.595-0500 I REPL [rsBackgroundSync] rollback our last optime: Apr 6 02:52:41:3 [js_test:multi_coll_drop] 2016-04-06T02:53:28.015-0500 c20012| 2016-04-06T02:52:44.595-0500 I REPL [rsBackgroundSync] rollback their last optime: Apr 6 02:52:43:8 [js_test:multi_coll_drop] 2016-04-06T02:53:28.016-0500 c20012| 2016-04-06T02:52:44.595-0500 I REPL [rsBackgroundSync] rollback diff in end of log times: -2 seconds [js_test:multi_coll_drop] 2016-04-06T02:53:28.017-0500 c20012| 2016-04-06T02:52:44.595-0500 I REPL [rsBackgroundSync] rollback 3 fixup [js_test:multi_coll_drop] 2016-04-06T02:53:28.017-0500 c20012| 2016-04-06T02:52:44.595-0500 I REPL [rsBackgroundSync] rollback 3.5 [js_test:multi_coll_drop] 2016-04-06T02:53:28.017-0500 c20012| 2016-04-06T02:52:44.595-0500 I REPL [rsBackgroundSync] rollback 4 n:2 [js_test:multi_coll_drop] 2016-04-06T02:53:28.018-0500 c20012| 2016-04-06T02:52:44.595-0500 I REPL [rsBackgroundSync] minvalid={ ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.020-0500 c20012| 2016-04-06T02:52:44.595-0500 D QUERY [rsBackgroundSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:28.021-0500 c20012| 2016-04-06T02:52:44.596-0500 I REPL [rsBackgroundSync] rollback 4.6 [js_test:multi_coll_drop] 2016-04-06T02:53:28.022-0500 s20015| 2016-04-06T02:53:11.723-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 99 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:28.023-0500 s20014| 2016-04-06T02:53:14.665-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:28.031-0500 c20013| 2016-04-06T02:52:12.082-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1007 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:22.082-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.032-0500 c20013| 2016-04-06T02:52:12.082-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.034-0500 c20013| 2016-04-06T02:52:12.085-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1007 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:28.037-0500 c20013| 2016-04-06T02:52:12.085-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1007 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.040-0500 c20013| 2016-04-06T02:52:12.085-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:14.085Z [js_test:multi_coll_drop] 2016-04-06T02:53:28.044-0500 c20013| 2016-04-06T02:52:12.165-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1009 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:22.165-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } 
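c20012 stepped down holding term-2 writes that the term-3 primary never replicated, so the fetcher reports OplogStartMissing (same ts, different term and hash) and the node transitions to ROLLBACK; the "rollback 2 FindCommonPoint" step then walks both oplogs backward until the (ts, t, h) triples agree. A sketch of inspecting the entry that this comparison starts from, assuming a direct shell connection to either node:

    // Read the newest entry in the local oplog; rollback compares these
    // (ts, t, h) values against the sync source's until they match.
    var last = db.getSiblingDB("local").getCollection("oplog.rs")
                 .find().sort({$natural: -1}).limit(1).next();
    printjson({ts: last.ts, t: last.t, h: last.h});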
[js_test:multi_coll_drop] 2016-04-06T02:53:28.045-0500 c20012| 2016-04-06T02:52:44.596-0500 I REPL [rsBackgroundSync] rollback 4.7 [js_test:multi_coll_drop] 2016-04-06T02:53:28.046-0500 c20012| 2016-04-06T02:52:44.596-0500 D QUERY [rsBackgroundSync] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:28.047-0500 c20012| 2016-04-06T02:52:44.596-0500 D QUERY [rsBackgroundSync] Using idhack: { _id: "mongovm16:20010:1459929128:185613966" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.050-0500 c20011| 2016-04-06T02:52:42.325-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.051-0500 c20011| 2016-04-06T02:52:42.325-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.054-0500 c20011| 2016-04-06T02:52:42.325-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.056-0500 c20011| 2016-04-06T02:52:42.325-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|12, t: 3 } and is durable through: { ts: Timestamp 1459929162000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.057-0500 c20011| 2016-04-06T02:52:42.325-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.065-0500 c20011| 2016-04-06T02:52:42.325-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.072-0500 c20011| 2016-04-06T02:52:42.326-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bb'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162313), why: "splitting chunk [{ _id: -70.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bb'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162313), why: 
"splitting chunk [{ _id: -70.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.076-0500 c20011| 2016-04-06T02:52:42.327-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.081-0500 c20011| 2016-04-06T02:52:42.328-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|12, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.085-0500 c20011| 2016-04-06T02:52:42.329-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|62 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|12, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.089-0500 c20011| 2016-04-06T02:52:42.329-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|12, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.092-0500 c20011| 2016-04-06T02:52:42.329-0500 D COMMAND [conn40] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|62 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|12, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.095-0500 c20011| 2016-04-06T02:52:42.329-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:28.099-0500 c20011| 2016-04-06T02:52:42.329-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|62 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|12, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.105-0500 c20011| 2016-04-06T02:52:42.330-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: -69.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-70.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-69.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|62 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.108-0500 c20011| 2016-04-06T02:52:42.330-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:28.110-0500 c20011| 2016-04-06T02:52:42.331-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:28.114-0500 c20011| 2016-04-06T02:52:42.331-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.116-0500 c20011| 2016-04-06T02:52:42.331-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-70.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.117-0500 c20011| 2016-04-06T02:52:42.331-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-69.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.122-0500 c20011| 2016-04-06T02:52:42.332-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", 
maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|12, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.123-0500 c20011| 2016-04-06T02:52:42.334-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|12, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.126-0500 c20011| 2016-04-06T02:52:42.335-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.128-0500 c20011| 2016-04-06T02:52:42.335-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.130-0500 c20011| 2016-04-06T02:52:42.335-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.133-0500 c20011| 2016-04-06T02:52:42.335-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|13, t: 3 } and is durable through: { ts: Timestamp 1459929162000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.138-0500 c20011| 2016-04-06T02:52:42.336-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.143-0500 c20011| 2016-04-06T02:52:42.343-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|13, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|12, t: 3 }, name-id: "227" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.146-0500 c20011| 2016-04-06T02:52:42.347-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
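The [conn30] getMore traffic above is a secondary tailing this node's oplog; term and lastKnownCommittedOpTime are internal replication fields piggybacked onto the cursor. A plain client can reproduce the same tailing pattern with an awaitData cursor; a sketch, with the resume timestamp purely illustrative:

    // Sketch of the oplog tailing shown by the [conn30] entries above.
    var local = db.getSiblingDB("local");
    var res = local.runCommand({
        find: "oplog.rs",
        filter: { ts: { $gte: Timestamp(1459929162, 12) } },  // resume point
        tailable: true,
        awaitData: true
    });
    // Each getMore then blocks up to maxTimeMS waiting for new oplog entries,
    // exactly the 2500ms rhythm visible in the log.
    res = local.runCommand({ getMore: res.cursor.id, collection: "oplog.rs", maxTimeMS: 2500 });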
[js_test:multi_coll_drop] 2016-04-06T02:53:28.147-0500 c20011| 2016-04-06T02:52:42.347-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:28.151-0500 c20011| 2016-04-06T02:52:42.347-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.153-0500 c20011| 2016-04-06T02:52:42.347-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|13, t: 3 } and is durable through: { ts: Timestamp 1459929162000|13, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.155-0500 c20011| 2016-04-06T02:52:42.347-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|13, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.158-0500 c20011| 2016-04-06T02:52:42.348-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.164-0500 c20011| 2016-04-06T02:52:42.348-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: -69.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-70.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-69.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|62 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 17ms
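The applyOps just logged commits both halves of the split in one atomic batch, and its preCondition aborts the batch unless the collection's highest chunk version is still 1|62 (logged above as Timestamp 1000|62). A trimmed sketch of that commit; the documents are copied from the entry above:

    // Sketch of the split commit above. preCondition makes the batch a no-op
    // if another router already bumped the top chunk version past 1|62.
    var epoch = ObjectId("5704c02806c33406d4d9c0c0");
    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp(1, 63), lastmodEpoch: epoch,
                   ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: -69.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-70.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp(1, 64), lastmodEpoch: epoch,
                   ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-69.0" } }
        ],
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1, 62) } } ],   // version 1|62
        writeConcern: { w: "majority", wtimeout: 15000 }
    });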
[js_test:multi_coll_drop] 2016-04-06T02:53:28.174-0500 c20011| 2016-04-06T02:52:42.348-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.348-0500-5704c04a65c17830b843f1bc", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162348), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -70.0 }, max: { _id: MaxKey } }, left: { min: { _id: -70.0 }, max: { _id: -69.0 }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -69.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.181-0500 c20011| 2016-04-06T02:52:42.349-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|12, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 14ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.186-0500 c20011| 2016-04-06T02:52:42.352-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|13, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.188-0500 c20011| 2016-04-06T02:52:42.353-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|14, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|13, t: 3 }, name-id: "228" }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.197-0500 c20011| 2016-04-06T02:52:42.356-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.198-0500 c20011| 2016-04-06T02:52:42.356-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:28.204-0500 c20011| 2016-04-06T02:52:42.357-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.206-0500 c20011| 2016-04-06T02:52:42.357-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|14, t: 3 } and is durable through: { ts: Timestamp 1459929162000|13, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.209-0500 c20011| 2016-04-06T02:52:42.357-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|14, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|13, t: 3 }, name-id: "228" }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.219-0500 c20011| 2016-04-06T02:52:42.357-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.228-0500 c20011| 2016-04-06T02:52:42.361-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.230-0500 c20011| 2016-04-06T02:52:42.361-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:28.234-0500 c20011| 2016-04-06T02:52:42.361-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.236-0500 c20011| 2016-04-06T02:52:42.361-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|14, t: 3 } and is durable through: { ts: Timestamp 1459929162000|14, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.236-0500 c20011| 2016-04-06T02:52:42.361-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|14, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.240-0500 c20011| 2016-04-06T02:52:42.361-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.247-0500 c20011| 2016-04-06T02:52:42.367-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.348-0500-5704c04a65c17830b843f1bc", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162348), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -70.0 }, max: { _id: MaxKey } }, left: { min: { _id: -70.0 }, max: { _id: -69.0 }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -69.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 18ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.250-0500 2016-04-06T02:53:14.694-0500 I NETWORK [thread2] trying reconnect to mongovm16:20011 (192.168.100.28) failed
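Each successful split is also journaled to config.changelog, as the insert above shows. A sketch for auditing those entries after the fact:

    // Sketch: reading back the split history recorded by the changelog insert.
    db.getSiblingDB("config").changelog.find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .forEach(function(e) {
            // details.left.max is the split point chosen for this split
            print(e.time + "  split at " + tojson(e.details.left.max));
        });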
[js_test:multi_coll_drop] 2016-04-06T02:53:28.253-0500 c20011| 2016-04-06T02:52:42.368-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1bb') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.254-0500 c20011| 2016-04-06T02:52:42.368-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.259-0500 c20011| 2016-04-06T02:52:42.368-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04a65c17830b843f1bb') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.261-0500 c20011| 2016-04-06T02:52:42.390-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|13, t: 3 } } cursorid:19853084149 numYields:1 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 37ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.264-0500 c20011| 2016-04-06T02:52:42.393-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|14, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.270-0500 c20011| 2016-04-06T02:52:42.395-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.270-0500 c20011| 2016-04-06T02:52:42.395-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:28.274-0500 c20011| 2016-04-06T02:52:42.395-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.280-0500 c20011| 2016-04-06T02:52:42.395-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|15, t: 3 } and is durable through: { ts: Timestamp 1459929162000|14, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.284-0500 c20011| 2016-04-06T02:52:42.395-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.289-0500 c20011| 2016-04-06T02:52:42.400-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|15, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|14, t: 3 }, name-id: "229" }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.296-0500 c20011| 2016-04-06T02:52:42.407-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.298-0500 c20011| 2016-04-06T02:52:42.407-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:28.301-0500 c20011| 2016-04-06T02:52:42.407-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.306-0500 c20011| 2016-04-06T02:52:42.407-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|15, t: 3 } and is durable through: { ts: Timestamp 1459929162000|15, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.308-0500 c20011| 2016-04-06T02:52:42.407-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|15, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.314-0500 c20011| 2016-04-06T02:52:42.407-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.320-0500 c20011| 2016-04-06T02:52:42.410-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|14, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 16ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.324-0500 c20011| 2016-04-06T02:52:42.410-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1bb') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 42ms
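Unlocking is the mirror image of the grant: the findAndModify above matches on the ts issued at acquisition, so only the current holder can flip state back to 0. A sketch of the release, reusing the ObjectId from the grant logged earlier:

    // Sketch of the unlock above: state returns to 0 (free); keying on the
    // grant's ts guarantees a process can only release a lock it still owns.
    db.getSiblingDB("config").locks.findAndModify({
        query: { ts: ObjectId("5704c04a65c17830b843f1bb") },
        update: { $set: { state: 0 } },
        writeConcern: { w: "majority", wtimeout: 15000 }
    });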
[js_test:multi_coll_drop] 2016-04-06T02:53:28.329-0500 c20011| 2016-04-06T02:52:42.411-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.334-0500 c20011| 2016-04-06T02:52:42.424-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|62 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.336-0500 c20011| 2016-04-06T02:52:42.425-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.338-0500 c20011| 2016-04-06T02:52:42.425-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|62 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.340-0500 c20011| 2016-04-06T02:52:42.425-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:28.345-0500 c20011| 2016-04-06T02:52:42.425-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|62 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.347-0500 c20011| 2016-04-06T02:52:42.428-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.349-0500 c20011| 2016-04-06T02:52:42.428-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.351-0500 c20011| 2016-04-06T02:52:42.428-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.353-0500 c20011| 2016-04-06T02:52:42.428-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:28.356-0500 c20011| 2016-04-06T02:52:42.436-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 7ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.360-0500 c20011| 2016-04-06T02:52:42.440-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bd'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162436), why: "splitting chunk [{ _id: -69.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.361-0500 c20011| 2016-04-06T02:52:42.440-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.363-0500 c20011| 2016-04-06T02:52:42.440-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.371-0500 c20011| 2016-04-06T02:52:42.442-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|15, t: 3 } } cursorid:19853084149 numYields:1 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 30ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.384-0500 c20011| 2016-04-06T02:52:42.446-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|15, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.389-0500 c20011| 2016-04-06T02:52:42.452-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.390-0500 c20011| 2016-04-06T02:52:42.452-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.391-0500 c20011| 2016-04-06T02:52:42.453-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.395-0500 c20011| 2016-04-06T02:52:42.453-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|16, t: 3 } and is durable through: { ts: Timestamp 1459929162000|15, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.399-0500 c20011| 2016-04-06T02:52:42.453-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.406-0500 c20011| 2016-04-06T02:52:42.475-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
2016-04-06T02:53:28.407-0500 c20011| 2016-04-06T02:52:42.475-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.409-0500 c20011| 2016-04-06T02:52:42.475-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.412-0500 c20011| 2016-04-06T02:52:42.475-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|16, t: 3 } and is durable through: { ts: Timestamp 1459929162000|16, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.417-0500 c20011| 2016-04-06T02:52:42.475-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.418-0500 c20011| 2016-04-06T02:52:42.482-0500 D REPL [conn40] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|16, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.423-0500 c20011| 2016-04-06T02:52:42.487-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bd'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162436), why: "splitting chunk [{ _id: -69.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bd'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162436), why: "splitting chunk [{ _id: -69.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 46ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.428-0500 c20011| 2016-04-06T02:52:42.487-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|15, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 41ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.433-0500 c20011| 2016-04-06T02:52:42.489-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|16, t: 3 
} } [js_test:multi_coll_drop] 2016-04-06T02:53:28.438-0500 c20011| 2016-04-06T02:52:42.490-0500 D COMMAND [conn40] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|16, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.441-0500 c20011| 2016-04-06T02:52:42.490-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|16, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.447-0500 c20011| 2016-04-06T02:52:42.490-0500 D COMMAND [conn40] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|16, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.449-0500 c20011| 2016-04-06T02:52:42.490-0500 D QUERY [conn40] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:28.458-0500 c20011| 2016-04-06T02:52:42.491-0500 I COMMAND [conn40] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|16, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.463-0500 c20011| 2016-04-06T02:52:42.494-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: -68.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-69.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-68.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|64 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.466-0500 c20011| 2016-04-06T02:52:42.494-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:28.468-0500 c20011| 2016-04-06T02:52:42.494-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:28.473-0500 c20011| 2016-04-06T02:52:42.494-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { 
acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.475-0500 c20011| 2016-04-06T02:52:42.494-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-69.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.477-0500 c20011| 2016-04-06T02:52:42.494-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-68.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.479-0500 c20011| 2016-04-06T02:52:42.495-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|16, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.481-0500 c20011| 2016-04-06T02:52:42.497-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|16, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.482-0500 c20011| 2016-04-06T02:52:42.507-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|17, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|16, t: 3 }, name-id: "231" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.487-0500 c20011| 2016-04-06T02:52:42.517-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.487-0500 c20011| 2016-04-06T02:52:42.517-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.491-0500 c20011| 2016-04-06T02:52:42.517-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.492-0500 c20011| 2016-04-06T02:52:42.517-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|17, t: 3 } and is durable through: { ts: Timestamp 1459929162000|16, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.495-0500 c20011| 2016-04-06T02:52:42.517-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|17, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|16, t: 3 }, name-id: "231" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.498-0500 c20011| 2016-04-06T02:52:42.517-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.501-0500 c20011| 2016-04-06T02:52:42.525-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.502-0500 c20011| 2016-04-06T02:52:42.525-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.505-0500 c20011| 2016-04-06T02:52:42.525-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.508-0500 c20011| 2016-04-06T02:52:42.525-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|17, t: 3 } and is durable through: { ts: Timestamp 1459929162000|17, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.510-0500 c20011| 2016-04-06T02:52:42.525-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|17, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.518-0500 c20011| 2016-04-06T02:52:42.525-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.537-0500 c20011| 2016-04-06T02:52:42.525-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: -68.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-69.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-68.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|64 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { 
acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 31ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.543-0500 c20011| 2016-04-06T02:52:42.526-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.526-0500-5704c04a65c17830b843f1be", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162526), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -69.0 }, max: { _id: MaxKey } }, left: { min: { _id: -69.0 }, max: { _id: -68.0 }, lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -68.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.546-0500 c20011| 2016-04-06T02:52:42.530-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|16, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 32ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.558-0500 c20011| 2016-04-06T02:52:42.530-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|17, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.564-0500 c20011| 2016-04-06T02:52:42.531-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|17, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.566-0500 c20011| 2016-04-06T02:52:42.534-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|17, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.570-0500 c20011| 2016-04-06T02:52:42.538-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.571-0500 c20011| 2016-04-06T02:52:42.538-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.575-0500 c20011| 2016-04-06T02:52:42.538-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.580-0500 c20011| 2016-04-06T02:52:42.538-0500 D REPL 
[conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|18, t: 3 } and is durable through: { ts: Timestamp 1459929162000|17, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.581-0500 c20011| 2016-04-06T02:52:42.538-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.584-0500 c20011| 2016-04-06T02:52:42.551-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|18, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|17, t: 3 }, name-id: "232" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.602-0500 c20011| 2016-04-06T02:52:42.631-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.604-0500 c20011| 2016-04-06T02:52:42.631-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.620-0500 c20011| 2016-04-06T02:52:42.631-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.633-0500 c20011| 2016-04-06T02:52:42.631-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|18, t: 3 } and is durable through: { ts: Timestamp 1459929162000|18, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.636-0500 c20011| 2016-04-06T02:52:42.631-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|18, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.654-0500 c20011| 2016-04-06T02:52:42.631-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.660-0500 c20011| 2016-04-06T02:52:42.632-0500 I COMMAND [conn30] command local.oplog.rs command: 
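The replSetUpdatePosition round trips above are how the commit point (_lastCommittedOpTime) advances: each member reports its applied and durable optimes, and the primary commits an entry once a majority is durable. The command itself is internal, but the per-member optimes that drive it are visible from any node; a sketch, noting that a dedicated optimes.lastCommittedOpTime field only appears in replSetGetStatus output on later server versions:

    // Sketch: observe the per-member applied optimes behind the commit point.
    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function(m) {
        print(m.name + " " + m.stateStr + " optime: " + tojson(m.optime));
    });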
[js_test:multi_coll_drop] 2016-04-06T02:53:28.660-0500 c20011| 2016-04-06T02:52:42.632-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|17, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 97ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.666-0500 c20011| 2016-04-06T02:52:42.633-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|18, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.673-0500 c20011| 2016-04-06T02:52:42.633-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.526-0500-5704c04a65c17830b843f1be", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162526), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -69.0 }, max: { _id: MaxKey } }, left: { min: { _id: -69.0 }, max: { _id: -68.0 }, lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -68.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 107ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.676-0500 c20011| 2016-04-06T02:52:42.634-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1bd') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.677-0500 c20011| 2016-04-06T02:52:42.634-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.681-0500 c20011| 2016-04-06T02:52:42.634-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04a65c17830b843f1bd') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.683-0500 c20011| 2016-04-06T02:52:42.634-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|18, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.695-0500 c20011| 2016-04-06T02:52:42.637-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|18, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.705-0500 c20011| 2016-04-06T02:52:42.638-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.707-0500 c20011| 2016-04-06T02:52:42.638-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:28.710-0500 c20011| 2016-04-06T02:52:42.638-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.712-0500 c20011| 2016-04-06T02:52:42.638-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|19, t: 3 } and is durable through: { ts: Timestamp 1459929162000|18, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.714-0500 c20011| 2016-04-06T02:52:42.638-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.717-0500 c20011| 2016-04-06T02:52:42.645-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|19, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|18, t: 3 }, name-id: "233" }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.720-0500 c20011| 2016-04-06T02:52:42.697-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.721-0500 c20011| 2016-04-06T02:52:42.697-0500 D COMMAND [conn35] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:28.728-0500 c20011| 2016-04-06T02:52:42.697-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.730-0500 c20011| 2016-04-06T02:52:42.697-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|19, t: 3 } and is durable through: { ts: Timestamp 1459929162000|19, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.731-0500 c20011| 2016-04-06T02:52:42.697-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|19, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.734-0500 c20011| 2016-04-06T02:52:42.698-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.739-0500 c20011| 2016-04-06T02:52:42.700-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|18, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 63ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.744-0500 c20011| 2016-04-06T02:52:42.700-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1bd') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 66ms
[js_test:multi_coll_drop] 2016-04-06T02:53:28.745-0500 c20011| 2016-04-06T02:52:42.701-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:28.747-0500 c20011| 2016-04-06T02:52:42.704-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|64 }
}, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.751-0500 c20011| 2016-04-06T02:52:42.704-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.756-0500 c20011| 2016-04-06T02:52:42.704-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|64 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.757-0500 c20011| 2016-04-06T02:52:42.705-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:28.759-0500 c20011| 2016-04-06T02:52:42.706-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|64 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.761-0500 c20011| 2016-04-06T02:52:42.712-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.763-0500 c20011| 2016-04-06T02:52:42.712-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.764-0500 c20011| 2016-04-06T02:52:42.712-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.765-0500 c20011| 2016-04-06T02:52:42.712-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:28.768-0500 c20011| 2016-04-06T02:52:42.712-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.774-0500 c20011| 2016-04-06T02:52:42.713-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bf'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162713), why: "splitting chunk [{ _id: -68.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.775-0500 c20011| 2016-04-06T02:52:42.713-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.780-0500 c20011| 2016-04-06T02:52:42.713-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.781-0500 c20011| 2016-04-06T02:52:42.713-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.787-0500 c20011| 2016-04-06T02:52:42.714-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|19, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.788-0500 c20011| 2016-04-06T02:52:42.717-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|19, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.794-0500 c20011| 2016-04-06T02:52:42.719-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.796-0500 c20011| 2016-04-06T02:52:42.719-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.797-0500 c20011| 2016-04-06T02:52:42.719-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.799-0500 c20011| 2016-04-06T02:52:42.719-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|20, t: 3 } and is durable through: { ts: Timestamp 1459929162000|19, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.802-0500 c20011| 2016-04-06T02:52:42.719-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.810-0500 c20011| 2016-04-06T02:52:42.735-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
2016-04-06T02:53:28.811-0500 c20011| 2016-04-06T02:52:42.735-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.814-0500 c20011| 2016-04-06T02:52:42.735-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.816-0500 c20011| 2016-04-06T02:52:42.735-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|20, t: 3 } and is durable through: { ts: Timestamp 1459929162000|20, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.819-0500 c20011| 2016-04-06T02:52:42.735-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.820-0500 c20011| 2016-04-06T02:52:42.736-0500 D REPL [conn40] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|20, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.834-0500 c20011| 2016-04-06T02:52:42.737-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|19, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 19ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.840-0500 c20011| 2016-04-06T02:52:42.737-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|20, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.849-0500 c20011| 2016-04-06T02:52:42.738-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bf'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162713), why: "splitting chunk [{ _id: -68.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04a65c17830b843f1bf'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162713), why: "splitting chunk [{ _id: -68.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 
25ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.852-0500 c20011| 2016-04-06T02:52:42.740-0500 D COMMAND [conn40] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|20, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.854-0500 c20011| 2016-04-06T02:52:42.740-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|20, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.858-0500 c20011| 2016-04-06T02:52:42.740-0500 D COMMAND [conn40] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|20, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.858-0500 c20011| 2016-04-06T02:52:42.740-0500 D QUERY [conn40] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:28.862-0500 c20011| 2016-04-06T02:52:42.740-0500 I COMMAND [conn40] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|20, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.869-0500 c20011| 2016-04-06T02:52:42.744-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|66 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|20, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.871-0500 c20011| 2016-04-06T02:52:42.744-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|20, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.876-0500 c20011| 2016-04-06T02:52:42.744-0500 D COMMAND [conn40] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|66 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|20, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.878-0500 c20011| 2016-04-06T02:52:42.744-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:28.895-0500 c20011| 2016-04-06T02:52:42.745-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|66 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|20, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.898-0500 c20011| 2016-04-06T02:52:42.754-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: -67.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-68.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-67.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|66 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.899-0500 c20011| 2016-04-06T02:52:42.754-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:28.901-0500 c20011| 2016-04-06T02:52:42.754-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:28.902-0500 c20011| 2016-04-06T02:52:42.754-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.904-0500 c20011| 2016-04-06T02:52:42.754-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-68.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.905-0500 c20011| 2016-04-06T02:52:42.755-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-67.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.909-0500 c20011| 2016-04-06T02:52:42.759-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", 
maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|20, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.921-0500 c20011| 2016-04-06T02:52:42.762-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|20, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.924-0500 c20011| 2016-04-06T02:52:42.763-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:28.925-0500 c20011| 2016-04-06T02:52:42.763-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.941-0500 c20011| 2016-04-06T02:52:42.764-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.947-0500 c20011| 2016-04-06T02:52:42.764-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|21, t: 3 } and is durable through: { ts: Timestamp 1459929162000|20, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.952-0500 c20011| 2016-04-06T02:52:42.764-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.956-0500 c20011| 2016-04-06T02:52:42.774-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|21, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|20, t: 3 }, name-id: "235" } [js_test:multi_coll_drop] 2016-04-06T02:53:28.959-0500 c20011| 2016-04-06T02:52:42.774-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
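
The applyOps on conn40 above is the metadata commit for this split: both post-split chunk documents are upserted in one batch, and the preCondition aborts the batch unless the collection's highest lastmod is still 1|66, so no concurrent metadata writer can interleave unnoticed. A sketch of the command shape as the log records it (abridged; run against the config database exactly as conn40 does, though applyOps is normally an admin-database command):

    // Commit both halves of the split atomically; fail if any other writer
    // has bumped the collection's chunk version past 1|66 in the meantime.
    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-68.0",
                   lastmod: Timestamp(1000, 67),   // Timestamp(seconds, increment)
                   ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: -67.0 },
                   shard: "shard0000" /* lastmodEpoch omitted for brevity */ },
              o2: { _id: "multidrop.coll-_id_-68.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-67.0",
                   lastmod: Timestamp(1000, 68),
                   ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: MaxKey },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-67.0" } }
        ],
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1000, 66) } } ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
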
2016-04-06T02:53:28.960-0500 c20011| 2016-04-06T02:52:42.774-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:28.967-0500 c20011| 2016-04-06T02:52:42.774-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.972-0500 c20011| 2016-04-06T02:52:42.774-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|21, t: 3 } and is durable through: { ts: Timestamp 1459929162000|21, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.973-0500 c20011| 2016-04-06T02:52:42.774-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|21, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:28.976-0500 c20011| 2016-04-06T02:52:42.774-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.979-0500 c20011| 2016-04-06T02:52:42.776-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|20, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:28.980-0500 c20011| 2016-04-06T02:52:42.776-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|21, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:28.989-0500 c20011| 2016-04-06T02:52:42.777-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: -67.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-68.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-67.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|66 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } 
} } protocol:op_command 23ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.036-0500 c20011| 2016-04-06T02:52:42.780-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.780-0500-5704c04a65c17830b843f1c0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162780), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -68.0 }, max: { _id: MaxKey } }, left: { min: { _id: -68.0 }, max: { _id: -67.0 }, lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -67.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.068-0500 c20011| 2016-04-06T02:52:42.781-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|21, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.074-0500 c20011| 2016-04-06T02:52:42.783-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|21, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.100-0500 c20011| 2016-04-06T02:52:42.788-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.100-0500 c20011| 2016-04-06T02:52:42.788-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:29.105-0500 c20011| 2016-04-06T02:52:42.788-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.105-0500 2016-04-06T02:53:15.676-0500 I NETWORK [thread2] reconnect mongovm16:20011 (192.168.100.28) ok [js_test:multi_coll_drop] 2016-04-06T02:53:29.117-0500 c20011| 2016-04-06T02:52:42.788-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|22, t: 3 } and is durable through: { ts: Timestamp 1459929162000|21, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.137-0500 c20011| 2016-04-06T02:52:42.788-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, 
cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.140-0500 c20013| 2016-04-06T02:52:12.166-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1009 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.143-0500 c20013| 2016-04-06T02:52:12.784-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.149-0500 c20013| 2016-04-06T02:52:12.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1010 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, appliedOpTime: { ts: Timestamp 1459929117000|1, t: -1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.150-0500 c20013| 2016-04-06T02:52:12.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1010 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.150-0500 c20013| 2016-04-06T02:52:14.044-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.150-0500 c20013| 2016-04-06T02:52:14.044-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.151-0500 c20013| 2016-04-06T02:52:14.045-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.157-0500 c20013| 2016-04-06T02:52:14.046-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1010 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.157-0500 c20013| 2016-04-06T02:52:14.046-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1006 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.158-0500 c20013| 2016-04-06T02:52:14.046-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1009 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:22.165-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 
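
The replSetUpdatePosition traffic that dominates this stretch (conn35 on c20011, and the Reporter on c20013 just above) is how downstream members stream progress to their sync source: each report carries every known member's durableOpTime and appliedOpTime, and on receipt the primary advances _lastCommittedOpTime to the newest optime that a majority of members, counting itself, has made durable. A simplified sketch of that rule (plain JS, optimes reduced to numbers for readability; the real comparison is on { term, timestamp }):

    // Roughly what the "Updating _lastCommittedOpTime" lines trace: sort all
    // members' durable optimes newest-first and take the majority-th entry.
    function majorityCommitPoint(durableOpTimes) {
        var sorted = durableOpTimes.slice().sort(function (a, b) {
            return (b.t - a.t) || (b.ts - a.ts);   // newest first
        });
        var majority = Math.floor(sorted.length / 2) + 1;
        return sorted[majority - 1];               // newest optime a majority holds
    }
    // Primary and one secondary durable through |22 (term 3), the lagging
    // secondary still at an old term-2 optime => commit point reaches |22:
    majorityCommitPoint([{ t: 3, ts: 22 }, { t: 3, ts: 22 }, { t: 2, ts: 1 }]);
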
2016-04-06T02:53:29.159-0500 c20013| 2016-04-06T02:52:14.046-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1009 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:29.160-0500 c20013| 2016-04-06T02:52:14.046-0500 I REPL [ReplicationExecutor] Error in heartbeat request to mongovm16:20011; HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:29.161-0500 c20013| 2016-04-06T02:52:14.046-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:14.046Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.162-0500 c20012| 2016-04-06T02:52:44.596-0500 D QUERY [rsBackgroundSync] Using idhack: query: { _id: "mongovm16:20014" } sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.163-0500 c20012| 2016-04-06T02:52:44.596-0500 D QUERY [rsBackgroundSync] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.165-0500 c20012| 2016-04-06T02:52:44.596-0500 D QUERY [rsBackgroundSync] Using idhack: query: { _id: "mongovm16:20015" } sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.166-0500 c20012| 2016-04-06T02:52:44.596-0500 D QUERY [rsBackgroundSync] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.167-0500 c20012| 2016-04-06T02:52:44.596-0500 I REPL [rsBackgroundSync] rollback 5 d:0 u:3 [js_test:multi_coll_drop] 2016-04-06T02:53:29.167-0500 c20012| 2016-04-06T02:52:44.596-0500 I REPL [rsBackgroundSync] rollback 6 [js_test:multi_coll_drop] 2016-04-06T02:53:29.167-0500 c20012| 2016-04-06T02:52:44.596-0500 D REPL [rsBackgroundSync] rollback truncate oplog after Apr 6 02:52:26:a [js_test:multi_coll_drop] 2016-04-06T02:53:29.168-0500 c20012| 2016-04-06T02:52:44.596-0500 D QUERY [rsBackgroundSync] Running query: query: {} sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.169-0500 c20012| 2016-04-06T02:52:44.596-0500 D QUERY [rsBackgroundSync] Collection admin.system.roles does not exist. 
Using EOF plan: query: {} sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.171-0500 c20012| 2016-04-06T02:52:44.596-0500 I COMMAND [rsBackgroundSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 14, w: 5, W: 1 } }, Database: { acquireCount: { r: 4, w: 1, W: 4 } }, Collection: { acquireCount: { r: 3 } }, oplog: { acquireCount: { R: 1, W: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.173-0500 c20012| 2016-04-06T02:52:44.596-0500 I REPL [rsBackgroundSync] rollback done [js_test:multi_coll_drop] 2016-04-06T02:53:29.173-0500 c20012| 2016-04-06T02:52:44.596-0500 I REPL [rsBackgroundSync] rollback finished [js_test:multi_coll_drop] 2016-04-06T02:53:29.179-0500 c20012| 2016-04-06T02:52:44.596-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.181-0500 c20012| 2016-04-06T02:52:44.596-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1079 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.182-0500 c20012| 2016-04-06T02:52:44.596-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1079 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.182-0500 c20012| 2016-04-06T02:52:44.597-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1079 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.183-0500 c20012| 2016-04-06T02:52:44.608-0500 I REPL [ReplicationExecutor] transition to RECOVERING [js_test:multi_coll_drop] 2016-04-06T02:53:29.187-0500 c20012| 2016-04-06T02:52:44.608-0500 D REPL [rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp 1459929146000|10, t: 2 } 8129632561130330747 [js_test:multi_coll_drop] 2016-04-06T02:53:29.187-0500 c20012| 2016-04-06T02:52:44.609-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.192-0500 c20012| 2016-04-06T02:52:44.610-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1081 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:14.610-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.195-0500 c20012| 2016-04-06T02:52:44.610-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.198-0500 c20012| 
2016-04-06T02:52:44.610-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1082 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.199-0500 c20012| 2016-04-06T02:52:45.616-0500 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:29.199-0500 c20012| 2016-04-06T02:52:45.724-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:38798 #31 (1 connection now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.200-0500 c20012| 2016-04-06T02:52:45.725-0500 D COMMAND [conn31] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.202-0500 c20012| 2016-04-06T02:52:45.725-0500 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.204-0500 c20012| 2016-04-06T02:52:45.725-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.204-0500 c20012| 2016-04-06T02:52:45.725-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.242-0500 c20012| 2016-04-06T02:52:45.725-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.262-0500 c20012| 2016-04-06T02:52:46.728-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1083 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:56.728-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.263-0500 c20012| 2016-04-06T02:52:46.729-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1083 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.266-0500 c20012| 2016-04-06T02:52:46.730-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1083 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, opTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.268-0500 c20012| 2016-04-06T02:52:46.730-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:48.730Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.270-0500 c20012| 2016-04-06T02:52:47.097-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:29.271-0500 c20012| 2016-04-06T02:52:47.097-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20011: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:29.274-0500 c20012| 2016-04-06T02:52:47.097-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:29.274-0500 c20012| 2016-04-06T02:52:47.725-0500 D 
COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.276-0500 c20012| 2016-04-06T02:52:47.725-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.279-0500 c20012| 2016-04-06T02:52:47.726-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.282-0500 c20012| 2016-04-06T02:52:48.732-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1085 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:58.732-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.284-0500 c20012| 2016-04-06T02:52:48.732-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1085 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.290-0500 c20012| 2016-04-06T02:52:48.733-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1085 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, opTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.291-0500 c20012| 2016-04-06T02:52:48.733-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:50.733Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.294-0500 c20012| 2016-04-06T02:52:49.726-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.298-0500 c20012| 2016-04-06T02:52:49.726-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.301-0500 c20012| 2016-04-06T02:52:49.726-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.304-0500 c20012| 2016-04-06T02:52:50.733-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1087 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:00.733-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.305-0500 c20012| 2016-04-06T02:52:50.733-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1087 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.307-0500 c20012| 2016-04-06T02:52:50.733-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1087 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, opTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.309-0500 c20012| 2016-04-06T02:52:50.734-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 
2016-04-06T07:52:52.734Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.310-0500 c20012| 2016-04-06T02:52:51.726-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.311-0500 c20012| 2016-04-06T02:52:51.726-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.312-0500 c20012| 2016-04-06T02:52:51.728-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.317-0500 c20012| 2016-04-06T02:52:52.734-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1089 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:02.734-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.318-0500 c20012| 2016-04-06T02:52:52.734-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1089 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.324-0500 c20012| 2016-04-06T02:52:52.735-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1089 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.325-0500 c20012| 2016-04-06T02:52:52.735-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:54.735Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.326-0500 c20012| 2016-04-06T02:52:53.714-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39105 #32 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.327-0500 c20012| 2016-04-06T02:52:53.714-0500 D COMMAND [conn32] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.330-0500 c20012| 2016-04-06T02:52:53.715-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.330-0500 c20012| 2016-04-06T02:52:53.715-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.332-0500 c20012| 2016-04-06T02:52:53.715-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.333-0500 c20012| 2016-04-06T02:52:53.715-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.336-0500 c20012| 2016-04-06T02:52:53.715-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.338-0500 c20012| 2016-04-06T02:52:53.730-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.339-0500 c20012| 2016-04-06T02:52:53.730-0500 D COMMAND 
[conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.349-0500 c20012| 2016-04-06T02:52:53.730-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.354-0500 c20012| 2016-04-06T02:52:54.227-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1069 timed out, adjusted timeout after getting connection from pool was 10000ms, op was id: 9, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 2016-04-06T02:52:44.227-0500, request: RemoteCommand 1069 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.368-0500 c20012| 2016-04-06T02:52:54.227-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Operation timing out; original request was: RemoteCommand 1069 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.373-0500 c20012| 2016-04-06T02:52:54.227-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1069 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1069 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.376-0500 c20012| 2016-04-06T02:52:54.228-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1069 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1069 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.382-0500 c20012| 2016-04-06T02:52:54.228-0500 I REPL [ReplicationExecutor] Error in heartbeat request to mongovm16:20013; ExceededTimeLimit: Operation timed out, request was RemoteCommand 1069 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.385-0500 c20011| 2016-04-06T02:52:42.799-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|22, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|21, t: 3 }, name-id: "236" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.389-0500 c20011| 2016-04-06T02:52:42.804-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, 
appliedOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.391-0500 c20011| 2016-04-06T02:52:42.804-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:29.396-0500 c20011| 2016-04-06T02:52:42.804-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.400-0500 c20011| 2016-04-06T02:52:42.804-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|22, t: 3 } and is durable through: { ts: Timestamp 1459929162000|22, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.401-0500 c20011| 2016-04-06T02:52:42.804-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|22, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.409-0500 c20011| 2016-04-06T02:52:42.804-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.415-0500 c20011| 2016-04-06T02:52:42.805-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|21, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.420-0500 c20011| 2016-04-06T02:52:42.805-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|22, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.421-0500 c20013| 2016-04-06T02:52:14.046-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1014 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:22.165-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.422-0500 c20013| 2016-04-06T02:52:14.047-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:29.423-0500 c20013| 2016-04-06T02:52:14.047-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1016 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:19.047-0500 cmd:{ getMore: 17466612721, collection: "oplog.rs", maxTimeMS: 2500, term: 1, lastKnownCommittedOpTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.423-0500 c20013| 2016-04-06T02:52:14.047-0500 I ASIO [rsBackgroundSync-0] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 
2016-04-06T02:53:29.424-0500 c20013| 2016-04-06T02:52:14.047-0500 I ASIO [rsBackgroundSync-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:29.426-0500 c20013| 2016-04-06T02:52:14.047-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.428-0500 c20013| 2016-04-06T02:52:14.047-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.432-0500 c20013| 2016-04-06T02:52:14.047-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1015 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.433-0500 c20013| 2016-04-06T02:52:14.047-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:50369 #12 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.433-0500 c20013| 2016-04-06T02:52:14.050-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1017 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.436-0500 c20013| 2016-04-06T02:52:14.051-0500 D COMMAND [conn12] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.437-0500 c20013| 2016-04-06T02:52:14.051-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.437-0500 c20013| 2016-04-06T02:52:14.051-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.439-0500 c20013| 2016-04-06T02:52:14.051-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.440-0500 c20013| 2016-04-06T02:52:14.051-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.442-0500 c20013| 2016-04-06T02:52:14.051-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1017 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.444-0500 c20013| 2016-04-06T02:52:14.051-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1016 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.446-0500 c20013| 2016-04-06T02:52:14.051-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.447-0500 c20013| 2016-04-06T02:52:14.051-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.448-0500 c20013| 2016-04-06T02:52:14.052-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.448-0500 c20013| 2016-04-06T02:52:14.052-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1015 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.449-0500 c20013| 2016-04-06T02:52:14.052-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1014 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.450-0500 c20013| 2016-04-06T02:52:14.052-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1014 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, 
durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.451-0500 c20013| 2016-04-06T02:52:14.052-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state SECONDARY
[js_test:multi_coll_drop] 2016-04-06T02:53:29.452-0500 c20013| 2016-04-06T02:52:14.052-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:16.052Z
[js_test:multi_coll_drop] 2016-04-06T02:53:29.454-0500 c20013| 2016-04-06T02:52:14.082-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.456-0500 c20013| 2016-04-06T02:52:14.082-0500 D COMMAND [conn5] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:29.459-0500 c20013| 2016-04-06T02:52:14.083-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:29.460-0500 c20013| 2016-04-06T02:52:14.085-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1019 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:24.085-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.461-0500 c20013| 2016-04-06T02:52:14.085-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1019 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:29.465-0500 c20013| 2016-04-06T02:52:14.085-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1019 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.467-0500 c20013| 2016-04-06T02:52:14.085-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:16.085Z
[js_test:multi_coll_drop] 2016-04-06T02:53:29.467-0500 c20013| 2016-04-06T02:52:14.553-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.468-0500 c20013| 2016-04-06T02:52:14.553-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:29.469-0500 c20013| 2016-04-06T02:52:15.054-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.473-0500 c20013| 2016-04-06T02:52:15.055-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:29.475-0500 c20013| 2016-04-06T02:52:15.556-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.475-0500 c20013| 2016-04-06T02:52:15.556-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:29.475-0500 c20013| 2016-04-06T02:52:15.851-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.477-0500 c20013| 2016-04-06T02:52:15.851-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:29.480-0500 c20013| 2016-04-06T02:52:16.047-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.481-0500 c20013| 2016-04-06T02:52:16.047-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:29.485-0500 c20013| 2016-04-06T02:52:16.047-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:29.489-0500 c20013| 2016-04-06T02:52:16.052-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1021 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:26.052-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.492-0500 c20013| 2016-04-06T02:52:16.052-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1021 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:29.494-0500 c20012| 2016-04-06T02:52:54.228-0500 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:2, msg: Operation timed out, request was RemoteCommand 1069 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.227-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.495-0500 c20012| 2016-04-06T02:52:54.228-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:56.228Z
[js_test:multi_coll_drop] 2016-04-06T02:53:29.500-0500 c20012| 2016-04-06T02:52:54.735-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1092 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:04.735-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.501-0500 c20012| 2016-04-06T02:52:54.736-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1092 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:29.503-0500 c20012| 2016-04-06T02:52:54.736-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1092 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.503-0500 c20012| 2016-04-06T02:52:54.737-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:56.737Z
[js_test:multi_coll_drop] 2016-04-06T02:53:29.504-0500 c20012| 2016-04-06T02:52:55.733-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:29.504-0500 c20012| 2016-04-06T02:52:55.733-0500 D COMMAND [conn31] command: replSetHeartbeat
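
Editor's note: the setDownValues entry above is the tail end of the heartbeat failure path visible earlier in this section. The heartbeat RemoteCommand 1069 to mongovm16:20013 was started at 02:52:44.227 with a 10000ms budget, expired at its expDate, and the ReplicationExecutor then marked member _id:2 down and scheduled a retry two seconds later. A minimal shell sketch (not part of the test) of how that state change becomes visible, assuming a connection to any multidrop-configRS member:

    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        // health drops to 0 for a member whose heartbeats time out, as for _id:2 above
        print(m.name + " state=" + m.stateStr + " health=" + m.health);
    });
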
[js_test:multi_coll_drop] 2016-04-06T02:53:29.512-0500 c20012| 2016-04-06T02:52:55.733-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.514-0500 c20012| 2016-04-06T02:52:56.228-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1094 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:06.228-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.515-0500 c20012| 2016-04-06T02:52:56.228-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.518-0500 c20012| 2016-04-06T02:52:56.230-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1095 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.521-0500 c20012| 2016-04-06T02:52:56.714-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39318 #33 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.522-0500 c20012| 2016-04-06T02:52:56.714-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39319 #34 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.545-0500 c20012| 2016-04-06T02:52:56.714-0500 D COMMAND [conn33] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.548-0500 c20012| 2016-04-06T02:52:56.715-0500 D COMMAND [conn34] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.549-0500 c20012| 2016-04-06T02:52:56.715-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.553-0500 c20012| 2016-04-06T02:52:56.715-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.553-0500 c20012| 2016-04-06T02:52:56.715-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.556-0500 ReplSetTest Could not call ismaster on node connection to mongovm16:20012: Error: error doing query: failed: network error while attempting to run command 'ismaster' on host 'mongovm16:20012' [js_test:multi_coll_drop] 2016-04-06T02:53:29.556-0500 c20012| 2016-04-06T02:52:56.715-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.560-0500 c20012| 2016-04-06T02:52:56.715-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.561-0500 c20012| 2016-04-06T02:52:56.715-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.562-0500 c20012| 2016-04-06T02:52:56.715-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.565-0500 c20012| 2016-04-06T02:52:56.715-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1 } 
[js_test:multi_coll_drop] 2016-04-06T02:53:29.568-0500 c20012| 2016-04-06T02:52:56.715-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.572-0500 c20012| 2016-04-06T02:52:56.715-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.577-0500 c20012| 2016-04-06T02:52:56.737-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1096 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:06.737-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.578-0500 c20012| 2016-04-06T02:52:56.737-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1096 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.580-0500 c20012| 2016-04-06T02:52:56.738-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1096 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.581-0500 c20012| 2016-04-06T02:52:56.738-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:58.738Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.581-0500 c20012| 2016-04-06T02:52:56.740-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39320 #35 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.583-0500 c20012| 2016-04-06T02:52:56.740-0500 D COMMAND [conn35] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.584-0500 c20012| 2016-04-06T02:52:56.740-0500 I COMMAND [conn35] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.585-0500 c20012| 2016-04-06T02:52:56.740-0500 D COMMAND [conn35] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.586-0500 c20012| 2016-04-06T02:52:56.740-0500 I COMMAND [conn35] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.596-0500 c20012| 2016-04-06T02:52:57.284-0500 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager [js_test:multi_coll_drop] 2016-04-06T02:53:29.608-0500 c20012| 2016-04-06T02:52:57.733-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.610-0500 c20012| 2016-04-06T02:52:57.733-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.611-0500 c20012| 2016-04-06T02:52:57.733-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.618-0500 c20012| 2016-04-06T02:52:58.741-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1098 -- target:mongovm16:20011 
db:admin expDate:2016-04-06T02:53:08.741-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.619-0500 c20012| 2016-04-06T02:52:58.741-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1098 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.619-0500 c20012| 2016-04-06T02:52:58.741-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1098 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.621-0500 c20012| 2016-04-06T02:52:58.743-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:00.743Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.629-0500 c20013| 2016-04-06T02:52:16.053-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.648-0500 c20013| 2016-04-06T02:52:16.053-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.649-0500 c20013| 2016-04-06T02:52:16.054-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1021 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.650-0500 c20013| 2016-04-06T02:52:16.054-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:18.054Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.650-0500 c20013| 2016-04-06T02:52:16.057-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.651-0500 c20013| 2016-04-06T02:52:16.057-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.656-0500 c20013| 2016-04-06T02:52:16.084-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.659-0500 c20013| 2016-04-06T02:52:16.085-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.672-0500 c20013| 2016-04-06T02:52:16.085-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1023 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:26.085-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.676-0500 c20013| 2016-04-06T02:52:16.085-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1023 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:29.680-0500 c20013| 2016-04-06T02:52:16.085-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.682-0500 c20013| 2016-04-06T02:52:16.086-0500 
D ASIO [NetworkInterfaceASIO-Replication-0] Request 1023 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.683-0500 c20013| 2016-04-06T02:52:16.088-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:18.088Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.683-0500 c20013| 2016-04-06T02:52:16.254-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.684-0500 c20013| 2016-04-06T02:52:16.254-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.685-0500 c20013| 2016-04-06T02:52:16.455-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.687-0500 c20013| 2016-04-06T02:52:16.455-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.694-0500 c20013| 2016-04-06T02:52:16.546-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.701-0500 c20013| 2016-04-06T02:52:16.546-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1025 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.701-0500 c20013| 2016-04-06T02:52:16.546-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.704-0500 c20013| 2016-04-06T02:52:16.546-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.705-0500 c20013| 2016-04-06T02:52:16.546-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:29.706-0500 c20013| 2016-04-06T02:52:16.546-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.707-0500 c20013| 2016-04-06T02:52:16.546-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1026 on 
host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.711-0500 c20013| 2016-04-06T02:52:16.546-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.711-0500 c20013| 2016-04-06T02:52:16.546-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1026 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.713-0500 c20013| 2016-04-06T02:52:16.546-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1025 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.724-0500 c20011| 2016-04-06T02:52:42.806-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.780-0500-5704c04a65c17830b843f1c0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162780), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -68.0 }, max: { _id: MaxKey } }, left: { min: { _id: -68.0 }, max: { _id: -67.0 }, lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -67.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 25ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.729-0500 c20011| 2016-04-06T02:52:42.806-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1bf') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.734-0500 c20011| 2016-04-06T02:52:42.806-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.745-0500 c20011| 2016-04-06T02:52:42.806-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
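
Editor's note: the conn40 sequence above on c20011 is the release of the sharding distributed lock after the split is logged to config.changelog: a findAndModify on config.locks sets state back to 0 (unlocked) under majority write concern, using the ts_1 index (the single-plan IXSCAN detail continues below). A shell approximation of that exact command shape, with the ObjectId taken from the log:

    var res = db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { ts: ObjectId("5704c04a65c17830b843f1bf") },
        update: { $set: { state: 0 } },  // state 0 == unlocked
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    printjson(res);
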
query: { ts: ObjectId('5704c04a65c17830b843f1bf') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.755-0500 c20011| 2016-04-06T02:52:42.807-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|22, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.757-0500 c20011| 2016-04-06T02:52:42.811-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|22, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.759-0500 c20011| 2016-04-06T02:52:42.811-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.759-0500 c20011| 2016-04-06T02:52:42.811-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:29.761-0500 c20011| 2016-04-06T02:52:42.811-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.766-0500 c20011| 2016-04-06T02:52:42.811-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|23, t: 3 } and is durable through: { ts: Timestamp 1459929162000|22, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.769-0500 c20011| 2016-04-06T02:52:42.811-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.771-0500 c20011| 2016-04-06T02:52:42.822-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|23, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|22, t: 3 }, name-id: "237" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.774-0500 c20011| 2016-04-06T02:52:42.823-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.775-0500 c20011| 2016-04-06T02:52:42.823-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:29.777-0500 c20011| 2016-04-06T02:52:42.823-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.787-0500 c20011| 2016-04-06T02:52:42.823-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|23, t: 3 } and is durable through: { ts: Timestamp 1459929162000|23, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.788-0500 c20011| 2016-04-06T02:52:42.823-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|23, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.798-0500 c20011| 2016-04-06T02:52:42.823-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.800-0500 c20011| 2016-04-06T02:52:42.823-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|22, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.803-0500 c20012| 2016-04-06T02:52:59.734-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.804-0500 c20012| 2016-04-06T02:52:59.734-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.810-0500 c20012| 2016-04-06T02:52:59.734-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.819-0500 c20012| 2016-04-06T02:53:00.743-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1100 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:10.743-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.822-0500 c20012| 2016-04-06T02:53:00.744-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting 
asynchronous command 1100 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.829-0500 c20012| 2016-04-06T02:53:00.744-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1100 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.829-0500 c20012| 2016-04-06T02:53:00.744-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:02.744Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.831-0500 c20012| 2016-04-06T02:53:01.737-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.831-0500 c20012| 2016-04-06T02:53:01.737-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.835-0500 c20012| 2016-04-06T02:53:01.737-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.836-0500 c20012| 2016-04-06T02:53:02.744-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1102 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:12.744-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.837-0500 c20012| 2016-04-06T02:53:02.744-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1102 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.838-0500 c20012| 2016-04-06T02:53:02.744-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1102 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.839-0500 c20012| 2016-04-06T02:53:02.744-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:04.744Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.840-0500 c20013| 2016-04-06T02:52:16.547-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1025 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.840-0500 c20013| 2016-04-06T02:52:16.552-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1016 finished with response: { cursor: { nextBatch: [], id: 17466612721, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.840-0500 c20013| 2016-04-06T02:52:16.552-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:29.842-0500 c20013| 2016-04-06T02:52:16.552-0500 D REPL [rsBackgroundSync-0] Cancelling oplog query because we have to choose a sync source. 
Current source: mongovm16:20011, OpTime{ ts: Timestamp 1459929130000|10, t: 1 }, hasSyncSource:0 [js_test:multi_coll_drop] 2016-04-06T02:53:29.846-0500 c20013| 2016-04-06T02:52:16.552-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1029 -- target:mongovm16:20011 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 17466612721 ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.849-0500 c20013| 2016-04-06T02:52:16.552-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1029 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.850-0500 c20013| 2016-04-06T02:52:16.552-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.851-0500 c20013| 2016-04-06T02:52:16.552-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1029 finished with response: { cursorsKilled: [ 17466612721 ], cursorsNotFound: [], cursorsAlive: [], cursorsUnknown: [], ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.853-0500 c20013| 2016-04-06T02:52:16.552-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:29.855-0500 c20012| 2016-04-06T02:53:03.716-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.857-0500 c20012| 2016-04-06T02:53:03.716-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.859-0500 c20012| 2016-04-06T02:53:03.737-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.860-0500 c20012| 2016-04-06T02:53:03.737-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.861-0500 c20013| 2016-04-06T02:52:16.552-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:16.552Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.861-0500 c20013| 2016-04-06T02:52:16.552-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:16.552Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.864-0500 c20013| 2016-04-06T02:52:16.552-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1031 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:26.552-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.866-0500 c20013| 2016-04-06T02:52:16.552-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1032 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:26.552-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.867-0500 c20013| 2016-04-06T02:52:16.552-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1031 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:29.868-0500 c20013| 2016-04-06T02:52:16.552-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1032 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:29.869-0500 c20013| 2016-04-06T02:52:16.553-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1031 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: 
"multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.872-0500 c20013| 2016-04-06T02:52:16.553-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1032 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.872-0500 c20013| 2016-04-06T02:52:16.553-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:19.053Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.874-0500 c20013| 2016-04-06T02:52:16.553-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:19.053Z [js_test:multi_coll_drop] 2016-04-06T02:53:29.876-0500 c20013| 2016-04-06T02:52:16.555-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.877-0500 c20013| 2016-04-06T02:52:16.555-0500 D COMMAND [conn5] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.882-0500 c20013| 2016-04-06T02:52:16.557-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.882-0500 c20013| 2016-04-06T02:52:16.558-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.883-0500 c20013| 2016-04-06T02:52:16.558-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.883-0500 c20013| 2016-04-06T02:52:16.659-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.886-0500 c20013| 2016-04-06T02:52:16.659-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.887-0500 c20013| 2016-04-06T02:52:16.860-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.887-0500 c20013| 2016-04-06T02:52:16.860-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.889-0500 c20013| 2016-04-06T02:52:17.059-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.891-0500 c20013| 2016-04-06T02:52:17.059-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.892-0500 c20013| 2016-04-06T02:52:17.061-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.895-0500 c20012| 2016-04-06T02:53:03.739-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 1ms 
[js_test:multi_coll_drop] 2016-04-06T02:53:29.897-0500 c20013| 2016-04-06T02:52:17.061-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.899-0500 c20013| 2016-04-06T02:52:17.202-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.900-0500 c20013| 2016-04-06T02:52:17.202-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.900-0500 c20013| 2016-04-06T02:52:17.262-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.902-0500 c20013| 2016-04-06T02:52:17.262-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.906-0500 c20013| 2016-04-06T02:52:17.335-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:50568 #13 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.908-0500 c20012| 2016-04-06T02:53:04.659-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39612 #36 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.910-0500 c20012| 2016-04-06T02:53:04.659-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39614 #37 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:29.911-0500 c20012| 2016-04-06T02:53:04.659-0500 D COMMAND [conn37] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.912-0500 c20012| 2016-04-06T02:53:04.659-0500 D COMMAND [conn36] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.913-0500 c20012| 2016-04-06T02:53:04.659-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.914-0500 c20013| 2016-04-06T02:52:17.336-0500 D COMMAND [conn13] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:29.916-0500 c20013| 2016-04-06T02:52:17.336-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.918-0500 c20013| 2016-04-06T02:52:17.336-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.922-0500 c20011| 2016-04-06T02:52:42.823-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1bf') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.924-0500 c20012| 2016-04-06T02:53:04.659-0500 I COMMAND [conn36] command admin.$cmd command: isMaster 
{ isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.926-0500 c20012| 2016-04-06T02:53:04.661-0500 D COMMAND [conn37] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 3, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.928-0500 c20012| 2016-04-06T02:53:04.661-0500 D COMMAND [conn37] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:29.930-0500 c20012| 2016-04-06T02:53:04.661-0500 D QUERY [conn37] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:29.931-0500 c20012| 2016-04-06T02:53:04.661-0500 D COMMAND [conn36] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.932-0500 c20012| 2016-04-06T02:53:04.661-0500 D COMMAND [conn36] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.934-0500 c20012| 2016-04-06T02:53:04.661-0500 I COMMAND [conn37] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 3, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.936-0500 c20012| 2016-04-06T02:53:04.662-0500 I COMMAND [conn36] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.940-0500 c20012| 2016-04-06T02:53:04.662-0500 D COMMAND [conn36] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.941-0500 c20012| 2016-04-06T02:53:04.662-0500 D COMMAND [conn36] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:29.944-0500 c20012| 2016-04-06T02:53:04.662-0500 D COMMAND [conn37] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 4, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.945-0500 c20012| 2016-04-06T02:53:04.663-0500 D COMMAND [conn37] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:29.947-0500 c20012| 2016-04-06T02:53:04.663-0500 I COMMAND [conn36] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.949-0500 c20012| 2016-04-06T02:53:04.665-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.950-0500 c20012| 2016-04-06T02:53:04.665-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.952-0500 c20012| 2016-04-06T02:53:04.665-0500 D ASIO 
[NetworkInterfaceASIO-Replication-0] Request 1082 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.954-0500 c20012| 2016-04-06T02:53:04.665-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1094 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.955-0500 c20012| 2016-04-06T02:53:04.665-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.959-0500 c20012| 2016-04-06T02:53:04.665-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1095 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:29.960-0500 c20012| 2016-04-06T02:53:04.665-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1081 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.963-0500 c20012| 2016-04-06T02:53:04.666-0500 D QUERY [conn37] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:29.969-0500 c20012| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1081 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.972-0500 c20012| 2016-04-06T02:53:04.666-0500 I COMMAND [conn37] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 4, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.974-0500 c20012| 2016-04-06T02:53:04.666-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20013 starting at filter: { ts: { $gte: Timestamp 1459929146000|10 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.977-0500 c20012| 2016-04-06T02:53:04.666-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1105 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:09.666-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929146000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:29.977-0500 c20012| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.979-0500 c20012| 2016-04-06T02:53:04.666-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.981-0500 c20012| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1094 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 0, durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, opTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:29.986-0500 c20012| 2016-04-06T02:53:04.666-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:29.987-0500 c20012| 2016-04-06T02:53:04.666-0500 D ASIO 
[NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1106 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:29.991-0500 c20012| 2016-04-06T02:53:04.666-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:29.999-0500 c20012| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1108 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:30.000-0500 c20012| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.000-0500 c20012| 2016-04-06T02:53:04.666-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:30.000-0500 c20012| 2016-04-06T02:53:04.666-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:06.666Z [js_test:multi_coll_drop] 2016-04-06T02:53:30.000-0500 c20012| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1109 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.001-0500 c20012| 2016-04-06T02:53:04.666-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:30.001-0500 c20012| 2016-04-06T02:53:04.667-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:30.001-0500 c20012| 2016-04-06T02:53:04.667-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.002-0500 c20012| 2016-04-06T02:53:04.667-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1109 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:30.002-0500 c20012| 2016-04-06T02:53:04.667-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1108 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.002-0500 c20012| 2016-04-06T02:53:04.667-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39620 #38 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:30.002-0500 c20012| 2016-04-06T02:53:04.667-0500 D COMMAND [conn38] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 
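(Annotation.) At this point c20012 has serviced mongovm16:20013's dry-run and real replSetRequestVotes calls for term 4 and is resuming replication with mongovm16:20013 as its sync source: rsBackgroundSync schedules a fetcher against that node's oplog starting at { ts: { $gte: Timestamp 1459929146000|10 } } (request 1105), while the SyncSourceFeedback reporter pushes each member's durable/applied optimes upstream via replSetUpdatePosition (request 1108). A minimal shell sketch of the fetcher's read, not part of the test, with host, resume timestamp and maxTimeMS copied from the log; mongod prints a Timestamp as <millis>|<increment>, so Timestamp 1459929146000|10 is Timestamp(1459929146, 10) in shell syntax, and the internal term: 4 field the fetcher attaches is omitted here:

// Sketch only: the oplog read behind request 1105.
var syncSource = new Mongo("mongovm16:20013").getDB("local");
var res = syncSource.runCommand({
    find: "oplog.rs",
    filter: {ts: {$gte: Timestamp(1459929146, 10)}},  // resume point
    tailable: true,     // keep the cursor open at the end of the capped oplog
    awaitData: true,    // block briefly for new entries instead of polling
    oplogReplay: true,  // use the optimized ts-based oplog seek
    maxTimeMS: 60000
});
printjson(res.cursor.firstBatch[0]);  // first op at/after the resume point

The getMore that later continues this cursor (request 1112, below) also carries lastKnownCommittedOpTime, which is how the sync source learns the commit point between batches.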
2016-04-06T02:53:30.016-0500 c20012| 2016-04-06T02:53:04.667-0500 I COMMAND [conn38] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:30.016-0500 c20012| 2016-04-06T02:53:04.668-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1108 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:30.020-0500 c20012| 2016-04-06T02:53:04.668-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:509 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:30.021-0500 c20012| 2016-04-06T02:53:04.668-0500 D COMMAND [conn38] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:30.023-0500 c20012| 2016-04-06T02:53:04.668-0500 D REPL [conn38] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929163000|8, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929146000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:30.023-0500 c20012| 2016-04-06T02:53:04.668-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999984μs [js_test:multi_coll_drop] 2016-04-06T02:53:30.024-0500 c20012| 2016-04-06T02:53:04.670-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.024-0500 c20012| 2016-04-06T02:53:04.670-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1106 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:30.024-0500 c20012| 2016-04-06T02:53:04.670-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1105 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.040-0500 c20012| 2016-04-06T02:53:04.670-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] warning: log line attempted (23kB) over max size (10kB), printing beginning and end ... 
Request 1105 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929146000|10, t: 2, h: 8129632561130330747, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: -75.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929152000|2, t: 3, h: -6846298690708567284, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1459929161000|1, t: 3, h: 724532987243091218, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929152631), up: 25, waiting: false } } }, { ts: Timestamp 1459929161000|2, t: 3, h: -8330485042973896426, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:41.710-0500-5704c04965c17830b843f1b0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161710), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -76.0 }, max: { _id: MaxKey } }, left: { min: { _id: -76.0 }, max: { _id: -75.0 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -75.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929161000|3, t: 3, h: 348221258137002286, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929151652), up: 24, waiting: false } } }, { ts: Timestamp 1459929161000|4, t: 3, h: 569718958403941141, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929161000|5, t: 3, h: 7208870335463155550, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929161743), up: 34, waiting: true } } }, { ts: Timestamp 1459929161000|6, t: 3, h: 9145859565647178306, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929161747), up: 34, waiting: true } } }, { ts: Timestamp 1459929161000|7, t: 3, h: 5502916262959992045, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04965c17830b843f1b1'), state: 2, when: new Date(1459929161772), why: "splitting chunk [{ _id: -75.0 }, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929161000|8, t: 3, h: 6949985940899244306, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId( .......... 
_id_-66.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929163000|2, t: 3, h: -3691712439411572840, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:43.119-0500-5704c04b65c17830b843f1c4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163119), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -66.0 }, max: { _id: MaxKey } }, left: { min: { _id: -66.0 }, max: { _id: -65.0 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -65.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929163000|3, t: 3, h: -5230974407681466498, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929163000|4, t: 3, h: 6336516151299301636, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04b65c17830b843f1c5'), state: 2, when: new Date(1459929163203), why: "splitting chunk [{ _id: -65.0 }, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929163000|5, t: 3, h: -8172355748864553859, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: -64.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929163000|6, t: 3, h: -317850286324307218, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:43.260-0500-5704c04b65c17830b843f1c6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163260), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -65.0 }, max: { _id: MaxKey } }, left: { min: { _id: -65.0 }, max: { _id: -64.0 }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -64.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929163000|7, t: 3, h: 2232396361430522479, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929163000|8, t: 3, h: -788849406847319887, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04b65c17830b843f1c7'), state: 2, when: new Date(1459929163335), why: "splitting chunk [{ _id: -64.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 22887452903, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:30.043-0500 c20012| 2016-04-06T02:53:04.671-0500 D REPL [rsBackgroundSync-0] fetcher read 53 
operations from remote oplog starting at ts: Timestamp 1459929146000|10 and ending at ts: Timestamp 1459929163000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:30.045-0500 c20012| 2016-04-06T02:53:04.671-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.045-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.046-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.048-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.049-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.049-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.050-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.050-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.052-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.053-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.055-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.056-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.056-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.058-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.059-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.061-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.062-0500 c20012| 2016-04-06T02:53:04.672-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:30.066-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.066-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.068-0500 c20012| 2016-04-06T02:53:04.672-0500 D 
EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.071-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.075-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.079-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.082-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.082-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.084-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.085-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.087-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.088-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.089-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.090-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.091-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.094-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.100-0500 c20012| 2016-04-06T02:53:04.672-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.126-0500 c20012| 2016-04-06T02:53:04.673-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1112 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:09.673-0500 cmd:{ getMore: 22887452903, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:30.128-0500 c20012| 2016-04-06T02:53:04.673-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1112 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.141-0500 c20012| 2016-04-06T02:53:04.673-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.141-0500 c20012| 2016-04-06T02:53:04.673-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29994595μs [js_test:multi_coll_drop] 2016-04-06T02:53:30.149-0500 c20012| 2016-04-06T02:53:04.673-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:30.156-0500 c20012| 2016-04-06T02:53:04.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1113 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:30.166-0500 c20012| 2016-04-06T02:53:04.673-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1113 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.167-0500 c20012| 2016-04-06T02:53:04.674-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1113 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:30.168-0500 c20012| 2016-04-06T02:53:04.674-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.170-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.172-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.172-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.173-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.174-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.174-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.176-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.182-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.197-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.200-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.201-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.207-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.209-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.212-0500 c20012| 2016-04-06T02:53:04.674-0500 D REPL [rsSync] replication batch size is 7 [js_test:multi_coll_drop] 2016-04-06T02:53:30.215-0500 c20012| 2016-04-06T02:53:04.674-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.217-0500 c20012| 2016-04-06T02:53:04.674-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.220-0500 c20012| 2016-04-06T02:53:04.674-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.226-0500 c20012| 2016-04-06T02:53:04.674-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.229-0500 c20012| 2016-04-06T02:53:04.674-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.230-0500 c20012| 2016-04-06T02:53:04.675-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.231-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR 
[repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.233-0500 c20012| 2016-04-06T02:53:04.675-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.235-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.239-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.244-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.245-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.246-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.246-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.252-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.253-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.257-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.262-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.263-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.263-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.264-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.265-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.268-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.281-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.283-0500 c20012| 2016-04-06T02:53:04.675-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.285-0500 c20012| 2016-04-06T02:53:04.676-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.286-0500 c20012| 2016-04-06T02:53:04.676-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.289-0500 c20012| 2016-04-06T02:53:04.676-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29991644μs [js_test:multi_coll_drop] 2016-04-06T02:53:30.290-0500 c20012| 2016-04-06T02:53:04.676-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.292-0500 c20012| 2016-04-06T02:53:04.676-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.293-0500 c20012| 2016-04-06T02:53:04.676-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.297-0500 c20012| 2016-04-06T02:53:04.676-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.299-0500 c20012| 2016-04-06T02:53:04.676-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.301-0500 c20012| 2016-04-06T02:53:04.676-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.306-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.306-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.307-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.308-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.309-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.311-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.312-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.312-0500 c20012| 2016-04-06T02:53:04.677-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:30.312-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.314-0500 c20012| 2016-04-06T02:53:04.677-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-75.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.314-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.319-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.319-0500 c20012| 2016-04-06T02:53:04.677-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-74.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.320-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.322-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.323-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.324-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.326-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.328-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.330-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.335-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.346-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.355-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.361-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.364-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.376-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.395-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.395-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.396-0500 c20012| 2016-04-06T02:53:04.677-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.399-0500 c20012| 2016-04-06T02:53:04.677-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.402-0500 c20012| 2016-04-06T02:53:04.678-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29990351μs [js_test:multi_coll_drop] 2016-04-06T02:53:30.408-0500 c20012| 2016-04-06T02:53:04.680-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:30.410-0500 c20012| 2016-04-06T02:53:04.680-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1115 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:30.412-0500 c20012| 2016-04-06T02:53:04.680-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1115 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.413-0500 c20012| 2016-04-06T02:53:04.680-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1115 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:30.422-0500 c20012| 2016-04-06T02:53:04.680-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:30.432-0500 c20012| 2016-04-06T02:53:04.680-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1116 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:30.435-0500 c20012| 2016-04-06T02:53:04.680-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20013 
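(Annotation.) The waitUntilOpTime lines above belong to conn38, the shard mongod d20010 (per its isMaster hostInfo "mongovm16:20010"): after the chunk splits recorded in the large applyOps batch, it re-reads config.chunks with readConcern { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, so the find parks until that optime enters this node's majority-committed snapshot. Each "waiting for a new snapshot to occur for micros: ..." line is a re-check after a replication batch lands, and the countdown (29999984μs, 29994595μs, 29991644μs, ...) is the remaining maxTimeMS budget. A shell sketch of the equivalent read, values copied from the log (Timestamp 1000|74 is Timestamp(1, 74), i.e. chunk version 1|74):

// Sketch only: the metadata refresh conn38 is blocked on; it returns once
// afterOpTime is majority-committed here, or fails after maxTimeMS.
var cfg = new Mongo("mongovm16:20012").getDB("config");
var res = cfg.runCommand({
    find: "chunks",
    filter: {ns: "multidrop.coll", lastmod: {$gte: Timestamp(1, 74)}},
    sort: {lastmod: 1},
    readConcern: {
        level: "majority",
        afterOpTime: {ts: Timestamp(1459929163, 8), t: NumberLong(3)}
    },
    maxTimeMS: 30000
});
res.cursor.firstBatch.forEach(printjson);  // chunks at version 1|74 and above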
[js_test:multi_coll_drop] 2016-04-06T02:53:30.435-0500 c20012| 2016-04-06T02:53:04.680-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1116 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.439-0500 c20012| 2016-04-06T02:53:04.680-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1116 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:30.441-0500 c20012| 2016-04-06T02:53:04.680-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1117 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.442-0500 c20012| 2016-04-06T02:53:04.681-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.444-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.445-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.446-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.448-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.454-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.455-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.455-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.456-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.461-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.463-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.464-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.466-0500 c20012| 2016-04-06T02:53:04.681-0500 D REPL [rsSync] replication batch size is 3 [js_test:multi_coll_drop] 2016-04-06T02:53:30.469-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.469-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.469-0500 c20012| 2016-04-06T02:53:04.681-0500 D QUERY [repl writer worker 12] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.472-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.473-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.474-0500 c20012| 2016-04-06T02:53:04.681-0500 D QUERY [repl writer worker 12] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:30.477-0500 c20012| 2016-04-06T02:53:04.681-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.478-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.478-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.479-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.481-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.482-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.483-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.490-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.494-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.497-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.497-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.504-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.520-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.522-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.529-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.530-0500 c20012| 2016-04-06T02:53:04.682-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.531-0500 c20012| 2016-04-06T02:53:04.684-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:30.532-0500 c20012| 2016-04-06T02:53:04.684-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] 
Request 1117 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:30.532-0500 c20012| 2016-04-06T02:53:04.684-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.533-0500 c20012| 2016-04-06T02:53:04.685-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.535-0500 c20012| 2016-04-06T02:53:04.685-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29983112μs [js_test:multi_coll_drop] 2016-04-06T02:53:30.537-0500 c20012| 2016-04-06T02:53:04.685-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:30.538-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.538-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.541-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.542-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.543-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.544-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.544-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.548-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.548-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.549-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.554-0500 c20012| 2016-04-06T02:53:04.685-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:30.555-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:30.557-0500 c20012| 
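(Annotation.) Interspersed with the writer-pool churn, each applied update logs "Using idhack: { _id: ... }": these oplog entries are keyed on _id, so they take the _id fast path and bypass the query planner. A hypothetical shell explain against the same collection would surface the same plan:

// Sketch only: an _id equality query is expected to report the IDHACK stage.
var chunks = new Mongo("mongovm16:20012").getDB("config").getCollection("chunks");
printjson(chunks.find({_id: "multidrop.coll-_id_-74.0"})
                .explain().queryPlanner.winningPlan);  // expect stage: "IDHACK"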
2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.559-0500 c20012| 2016-04-06T02:53:04.685-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1120 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.560-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.562-0500 c20012| 2016-04-06T02:53:04.685-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:30.563-0500 c20012| 2016-04-06T02:53:04.685-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-74.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.563-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.565-0500 c20012| 2016-04-06T02:53:04.685-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.565-0500 c20012| 2016-04-06T02:53:04.686-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-73.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.567-0500 c20012| 2016-04-06T02:53:04.685-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1120 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.568-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.570-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.571-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.572-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.572-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.573-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.573-0500 c20012| 2016-04-06T02:53:04.686-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1120 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.573-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.575-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.577-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.583-0500 c20012| 2016-04-06T02:53:04.686-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.588-0500 c20012| 2016-04-06T02:53:04.686-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1121 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.589-0500 c20012| 2016-04-06T02:53:04.686-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1121 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.589-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.590-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.591-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.593-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.594-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.595-0500 c20012| 2016-04-06T02:53:04.686-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1121 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.596-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.597-0500 c20012| 2016-04-06T02:53:04.686-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.599-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.601-0500 c20012| 2016-04-06T02:53:04.687-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:30.602-0500 c20012| 2016-04-06T02:53:04.687-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:30.605-0500 c20012| 2016-04-06T02:53:04.687-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29980676μs
[js_test:multi_coll_drop] 2016-04-06T02:53:30.605-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.607-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.608-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.608-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.610-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.610-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.611-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.613-0500 c20012| 2016-04-06T02:53:04.687-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.613-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.614-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.614-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.615-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.615-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.616-0500 c20012| 2016-04-06T02:53:04.688-0500 D REPL [rsSync] replication batch size is 3
[js_test:multi_coll_drop] 2016-04-06T02:53:30.616-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.620-0500 c20012| 2016-04-06T02:53:04.688-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.621-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.621-0500 c20012| 2016-04-06T02:53:04.688-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.623-0500 c20012| 2016-04-06T02:53:04.688-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1124 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.655-0500 c20012| 2016-04-06T02:53:04.688-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1124 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.656-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.671-0500 c20012| 2016-04-06T02:53:04.688-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.673-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.675-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.679-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.682-0500 c20012| 2016-04-06T02:53:04.688-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1124 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.684-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.686-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.688-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.691-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.691-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.692-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.693-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.695-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.697-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.697-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.698-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.700-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.702-0500 c20012| 2016-04-06T02:53:04.688-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.705-0500 c20012| 2016-04-06T02:53:04.689-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:30.711-0500 c20012| 2016-04-06T02:53:04.689-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.716-0500 c20012| 2016-04-06T02:53:04.689-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1126 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.719-0500 c20012| 2016-04-06T02:53:04.689-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1126 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.730-0500 c20012| 2016-04-06T02:53:04.689-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1126 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.736-0500 c20012| 2016-04-06T02:53:04.689-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:30.738-0500 c20012| 2016-04-06T02:53:04.689-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29978979μs
[js_test:multi_coll_drop] 2016-04-06T02:53:30.738-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.740-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.740-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.740-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.748-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.750-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.751-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.752-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.753-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.754-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.756-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.758-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.758-0500 c20012| 2016-04-06T02:53:04.689-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:30.759-0500 c20012| 2016-04-06T02:53:04.689-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.760-0500 c20012| 2016-04-06T02:53:04.690-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-73.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.761-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.763-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.767-0500 c20012| 2016-04-06T02:53:04.690-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-72.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.770-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.780-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.780-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.781-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.782-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.784-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.797-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.798-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.801-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.801-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.805-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.805-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.807-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.813-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.814-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.815-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.816-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.817-0500 c20012| 2016-04-06T02:53:04.690-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:30.823-0500 c20012| 2016-04-06T02:53:04.690-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.826-0500 c20012| 2016-04-06T02:53:04.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1128 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.827-0500 c20012| 2016-04-06T02:53:04.690-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:30.827-0500 c20012| 2016-04-06T02:53:04.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1128 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.828-0500 c20012| 2016-04-06T02:53:04.690-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29977599μs
[js_test:multi_coll_drop] 2016-04-06T02:53:30.829-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.831-0500 c20012| 2016-04-06T02:53:04.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1128 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.832-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.832-0500 c20012| 2016-04-06T02:53:04.690-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.833-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.833-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.835-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.836-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.836-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.838-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.839-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.839-0500 c20012| 2016-04-06T02:53:04.691-0500 D REPL [rsSync] replication batch size is 3
[js_test:multi_coll_drop] 2016-04-06T02:53:30.840-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.840-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.840-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.841-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.842-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.844-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.845-0500 c20012| 2016-04-06T02:53:04.691-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.846-0500 c20012| 2016-04-06T02:53:04.691-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.847-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.848-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.854-0500 c20012| 2016-04-06T02:53:04.691-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.855-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.863-0500 c20012| 2016-04-06T02:53:04.691-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1130 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.866-0500 c20012| 2016-04-06T02:53:04.691-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1130 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.867-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.869-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.870-0500 c20012| 2016-04-06T02:53:04.691-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.873-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.874-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.875-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.877-0500 c20012| 2016-04-06T02:53:04.692-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1130 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.879-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.886-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.887-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.888-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.888-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.889-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.891-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.893-0500 c20012| 2016-04-06T02:53:04.692-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:30.895-0500 c20012| 2016-04-06T02:53:04.692-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:30.895-0500 c20012| 2016-04-06T02:53:04.692-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29975637μs
[js_test:multi_coll_drop] 2016-04-06T02:53:30.899-0500 c20012| 2016-04-06T02:53:04.692-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.907-0500 c20012| 2016-04-06T02:53:04.692-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1132 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.910-0500 c20012| 2016-04-06T02:53:04.692-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1132 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.910-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.912-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.912-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.913-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.916-0500 c20012| 2016-04-06T02:53:04.692-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1132 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.917-0500 c20012| 2016-04-06T02:53:04.692-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.926-0500 c20012| 2016-04-06T02:53:04.693-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
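The replSetUpdatePosition round trips above are the secondary c20012 forwarding every member's durableOpTime/appliedOpTime to its sync source c20013 after each applied batch; note how memberId 1's applied optime advances (1459929161000|11 → |12 → |15 → 1459929162000|1 → |4) as the chunk-metadata writes replicate. A minimal shell sketch, not part of this test, for watching the same progress from outside via replSetGetStatus follows; the host and port are taken from this log, and the exact status fields vary by server version, so treat it as illustrative only:

    // Connect directly to one config-server node and print each member's
    // replication progress, mirroring the appliedOpTime values in the log.
    var admin = new Mongo("mongovm16:20012").getDB("admin");
    var status = admin.runCommand({ replSetGetStatus: 1 });
    status.members.forEach(function(m) {
        // m.optime is the member's last applied optime as seen by this node.
        print(m.name + " [" + m.stateStr + "] applied: " + tojson(m.optime));
    });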
[js_test:multi_coll_drop] 2016-04-06T02:53:30.930-0500 c20012| 2016-04-06T02:53:04.693-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1133 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.933-0500 c20012| 2016-04-06T02:53:04.693-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1133 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.933-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.934-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.936-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.938-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.938-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.939-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.941-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.941-0500 c20012| 2016-04-06T02:53:04.693-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:30.943-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.948-0500 c20012| 2016-04-06T02:53:04.693-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-72.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.948-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.949-0500 c20012| 2016-04-06T02:53:04.693-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1133 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.952-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.955-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.956-0500 c20012| 2016-04-06T02:53:04.693-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-71.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.962-0500 c20012| 2016-04-06T02:53:04.693-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.966-0500 c20012| 2016-04-06T02:53:04.693-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1135 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.967-0500 c20012| 2016-04-06T02:53:04.693-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1135 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:30.968-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.971-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.971-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.975-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.975-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.977-0500 c20012| 2016-04-06T02:53:04.693-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1135 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:30.977-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.979-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.982-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.983-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.985-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.986-0500 c20012| 2016-04-06T02:53:04.693-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.987-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.987-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.988-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.989-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.992-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:30.994-0500 c20012| 2016-04-06T02:53:04.694-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:31.002-0500 c20012| 2016-04-06T02:53:04.694-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:31.006-0500 c20012| 2016-04-06T02:53:04.694-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.006-0500 c20012| 2016-04-06T02:53:04.694-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29973838μs
[js_test:multi_coll_drop] 2016-04-06T02:53:31.008-0500 c20012| 2016-04-06T02:53:04.694-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1138 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.008-0500 c20012| 2016-04-06T02:53:04.694-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1138 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:31.009-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.016-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.016-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.019-0500 c20012| 2016-04-06T02:53:04.694-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1138 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.022-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.024-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.025-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.026-0500 c20012| 2016-04-06T02:53:04.694-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.026-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.028-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.029-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.030-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.030-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.030-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.031-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.033-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.038-0500 c20012| 2016-04-06T02:53:04.695-0500 D REPL [rsSync] replication batch size is 3
[js_test:multi_coll_drop] 2016-04-06T02:53:31.041-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.042-0500 c20012| 2016-04-06T02:53:04.695-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.043-0500 c20012| 2016-04-06T02:53:04.695-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.044-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.047-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.048-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.048-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.049-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.053-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:31.059-0500 s20015| 2016-04-06T02:53:16.717-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:31.063-0500 s20015| 2016-04-06T02:53:16.717-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:31.065-0500 s20015| 2016-04-06T02:53:16.717-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20013, no events
[js_test:multi_coll_drop] 2016-04-06T02:53:31.069-0500 s20015| 2016-04-06T02:53:18.271-0500 D ASIO [Balancer] startCommand: RemoteCommand 100 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:48.271-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198271), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.071-0500 s20015| 2016-04-06T02:53:18.271-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 100 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:31.073-0500 c20013| 2016-04-06T02:52:17.336-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.073-0500 c20013| 2016-04-06T02:52:17.336-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.075-0500 c20013| 2016-04-06T02:52:17.336-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.079-0500 c20013| 2016-04-06T02:52:17.438-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.080-0500 c20013| 2016-04-06T02:52:17.438-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.080-0500 c20013| 2016-04-06T02:52:17.462-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.081-0500 c20013| 2016-04-06T02:52:17.463-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.082-0500 c20013| 2016-04-06T02:52:17.560-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.084-0500 c20013| 2016-04-06T02:52:17.560-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.086-0500 c20013| 2016-04-06T02:52:17.663-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.088-0500 c20013| 2016-04-06T02:52:17.663-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.090-0500 c20013| 2016-04-06T02:52:17.703-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.092-0500 c20013| 2016-04-06T02:52:17.703-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.093-0500 c20013| 2016-04-06T02:52:17.864-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.094-0500 c20013| 2016-04-06T02:52:17.864-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.094-0500 c20013| 2016-04-06T02:52:17.939-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.096-0500 c20013| 2016-04-06T02:52:17.939-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.096-0500 c20013| 2016-04-06T02:52:18.061-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.099-0500 c20013| 2016-04-06T02:52:18.061-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.099-0500 c20013| 2016-04-06T02:52:18.065-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.101-0500 c20013| 2016-04-06T02:52:18.065-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.102-0500 c20013| 2016-04-06T02:52:18.204-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.104-0500 c20013| 2016-04-06T02:52:18.204-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.106-0500 c20013| 2016-04-06T02:52:18.266-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.106-0500 c20013| 2016-04-06T02:52:18.266-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.107-0500 c20013| 2016-04-06T02:52:18.359-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.108-0500 c20013| 2016-04-06T02:52:18.359-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.108-0500 c20013| 2016-04-06T02:52:18.440-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.112-0500 c20013| 2016-04-06T02:52:18.440-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.113-0500 c20013| 2016-04-06T02:52:18.467-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.117-0500 c20013| 2016-04-06T02:52:18.467-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.118-0500 c20013| 2016-04-06T02:52:18.547-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.119-0500 c20013| 2016-04-06T02:52:18.547-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:31.121-0500 c20013| 2016-04-06T02:52:18.547-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.121-0500 c20013| 2016-04-06T02:52:18.562-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.122-0500 c20013| 2016-04-06T02:52:18.562-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.123-0500 c20013| 2016-04-06T02:52:18.668-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.128-0500 c20013| 2016-04-06T02:52:18.668-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.130-0500 c20013| 2016-04-06T02:52:18.706-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.133-0500 c20013| 2016-04-06T02:52:18.707-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.135-0500 c20013| 2016-04-06T02:52:18.869-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.136-0500 c20013| 2016-04-06T02:52:18.869-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.139-0500 c20013| 2016-04-06T02:52:18.941-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.143-0500 c20013| 2016-04-06T02:52:18.941-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.147-0500 c20013| 2016-04-06T02:52:19.047-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid
[js_test:multi_coll_drop] 2016-04-06T02:53:31.149-0500 c20013| 2016-04-06T02:52:19.047-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20011: InvalidSyncSource: Sync target is no longer valid
[js_test:multi_coll_drop] 2016-04-06T02:53:31.151-0500 c20013| 2016-04-06T02:52:19.047-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid
[js_test:multi_coll_drop] 2016-04-06T02:53:31.154-0500 c20013| 2016-04-06T02:52:19.053-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1035 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:29.053-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.160-0500 c20013| 2016-04-06T02:52:19.053-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1036 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:29.053-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.164-0500 c20013| 2016-04-06T02:52:19.053-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1035 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:31.175-0500 c20013| 2016-04-06T02:52:19.053-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1036 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:31.180-0500 c20013| 2016-04-06T02:52:19.053-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1035 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.181-0500 c20013| 2016-04-06T02:52:19.053-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:21.553Z
[js_test:multi_coll_drop] 2016-04-06T02:53:31.183-0500 c20013| 2016-04-06T02:52:19.063-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.184-0500 c20013| 2016-04-06T02:52:19.066-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.189-0500 c20013| 2016-04-06T02:52:19.066-0500 D COMMAND [conn5] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.190-0500 c20013| 2016-04-06T02:52:19.066-0500 D COMMAND [conn5] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:31.195-0500 c20013| 2016-04-06T02:52:19.069-0500 I COMMAND [conn5] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 1 } numYields:0 reslen:439 locks:{} protocol:op_command 2ms
[js_test:multi_coll_drop] 2016-04-06T02:53:31.199-0500 c20013| 2016-04-06T02:52:19.076-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1036 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: { ts: Timestamp 1459929130000|10, t: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:31.202-0500 c20013| 2016-04-06T02:52:19.076-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:21.576Z
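The InvalidSyncSource errors and fresh heartbeats above, followed by the replSetRequestVotes traffic below, are the shape of a config-server election: this suite runs with a continuous config-primary stepdown override, so a candidate first dry-runs a vote request in the current term (term: 1) and only then holds the real election in term 2. A minimal sketch, assuming mongovm16:20011 is the current primary, of how such an election can be provoked by hand:

    // Ask the primary to step down for 60s; force:true ignores secondary lag.
    var admin = new Mongo("mongovm16:20011").getDB("admin");
    try {
        admin.runCommand({ replSetStepDown: 60, force: true });
    } catch (e) {
        // mongod drops the connection while stepping down, so a network
        // error from the shell here is expected even on success.
        print("stepdown: " + e);
    }

The surviving secondaries then race through exactly the dry-run/real replSetRequestVotes exchange recorded in the next entries.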
[js_test:multi_coll_drop] 2016-04-06T02:53:31.202-0500 c20013| 2016-04-06T02:52:19.076-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.206-0500 c20013| 2016-04-06T02:52:19.076-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.209-0500 c20013| 2016-04-06T02:52:19.140-0500 D COMMAND [conn5] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.210-0500 c20013| 2016-04-06T02:52:19.140-0500 D COMMAND [conn5] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:31.212-0500 c20013| 2016-04-06T02:52:19.140-0500 D QUERY [conn5] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:31.218-0500 c20013| 2016-04-06T02:52:19.141-0500 I COMMAND [conn5] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.225-0500 c20013| 2016-04-06T02:52:19.142-0500 D COMMAND [conn5] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.226-0500 c20013| 2016-04-06T02:52:19.142-0500 D COMMAND [conn5] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:31.230-0500 c20013| 2016-04-06T02:52:19.142-0500 D QUERY [conn5] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:31.236-0500 c20013| 2016-04-06T02:52:19.142-0500 I COMMAND [conn5] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929130000|10, t: 1 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.242-0500 c20013| 2016-04-06T02:52:19.142-0500 D NETWORK [conn5] SocketException: remote: 192.168.100.28:49469 error: 9001 socket exception [CLOSED] server [192.168.100.28:49469] [js_test:multi_coll_drop] 2016-04-06T02:53:31.247-0500 c20013| 2016-04-06T02:52:19.142-0500 I NETWORK [conn5] end connection 192.168.100.28:49469 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:31.248-0500 c20013| 2016-04-06T02:52:19.143-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:50633 #14 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:31.257-0500 c20013| 2016-04-06T02:52:19.143-0500 D COMMAND [conn14] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.275-0500 c20013| 2016-04-06T02:52:19.143-0500 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.285-0500 c20013| 2016-04-06T02:52:19.143-0500 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.285-0500 c20013| 2016-04-06T02:52:19.143-0500 D COMMAND [conn14] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:31.294-0500 c20013| 2016-04-06T02:52:19.144-0500 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.295-0500 c20013| 2016-04-06T02:52:19.208-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.298-0500 c20013| 2016-04-06T02:52:19.208-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.300-0500 c20013| 2016-04-06T02:52:19.277-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.303-0500 c20013| 2016-04-06T02:52:19.277-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.303-0500 c20013| 2016-04-06T02:52:19.442-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.304-0500 c20013| 2016-04-06T02:52:19.442-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.304-0500 c20013| 2016-04-06T02:52:19.478-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 
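The two replSetRequestVotes commands above are one election round as seen from this node: a dryRun probe for the current term followed by the real vote for term 2, each touching local.replset.election, where a member persists the last vote it granted. A sketch of inspecting that document by hand; the collection name comes straight from the log, while the exact field layout ({ term, candidateIndex }) is an assumption of this sketch:

  // on the voting member: the durable record of the last ballot it cast
  var lastVote = new Mongo("mongovm16:20013")
      .getDB("local")
      .getCollection("replset.election")
      .findOne();
  printjson(lastVote); // expected to reflect term 2 and candidateIndex 1, per the vote request above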
2016-04-06T02:53:31.307-0500 c20013| 2016-04-06T02:52:19.478-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.307-0500 c20013| 2016-04-06T02:52:19.576-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.308-0500 c20013| 2016-04-06T02:52:19.576-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.308-0500 c20013| 2016-04-06T02:52:19.679-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.309-0500 c20013| 2016-04-06T02:52:19.679-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.313-0500 c20013| 2016-04-06T02:52:19.943-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.314-0500 c20013| 2016-04-06T02:52:19.943-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.315-0500 c20013| 2016-04-06T02:52:21.048-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.316-0500 c20013| 2016-04-06T02:52:21.048-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:31.318-0500 c20013| 2016-04-06T02:52:21.048-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.320-0500 c20013| 2016-04-06T02:52:21.144-0500 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.321-0500 c20013| 2016-04-06T02:52:21.144-0500 D COMMAND [conn14] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:31.325-0500 c20013| 2016-04-06T02:52:21.144-0500 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.328-0500 c20013| 2016-04-06T02:52:21.553-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1039 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:31.553-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.329-0500 c20013| 2016-04-06T02:52:21.553-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1039 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:31.331-0500 c20013| 2016-04-06T02:52:21.554-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1039 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, opTime: 
{ ts: Timestamp 1459929130000|10, t: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.335-0500 c20013| 2016-04-06T02:52:21.554-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:24.054Z [js_test:multi_coll_drop] 2016-04-06T02:53:31.339-0500 c20013| 2016-04-06T02:52:21.576-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1041 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:31.576-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.341-0500 c20013| 2016-04-06T02:52:21.576-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1041 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.345-0500 c20013| 2016-04-06T02:52:21.576-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1041 finished with response: { ok: 1.0, electionTime: new Date(6270347906482438145), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929139000|5, t: 2 }, opTime: { ts: Timestamp 1459929139000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.347-0500 c20013| 2016-04-06T02:52:21.576-0500 I REPL [ReplicationExecutor] Member mongovm16:20012 is now in state PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:31.348-0500 c20013| 2016-04-06T02:52:21.576-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:24.076Z [js_test:multi_coll_drop] 2016-04-06T02:53:31.349-0500 c20013| 2016-04-06T02:52:21.642-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:50742 #15 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:31.351-0500 c20013| 2016-04-06T02:52:21.642-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:50743 #16 (11 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:31.353-0500 c20013| 2016-04-06T02:52:21.642-0500 D COMMAND [conn15] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.359-0500 c20013| 2016-04-06T02:52:21.642-0500 D COMMAND [conn16] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.360-0500 c20013| 2016-04-06T02:52:21.643-0500 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.367-0500 c20013| 2016-04-06T02:52:21.643-0500 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.368-0500 c20013| 2016-04-06T02:52:21.643-0500 D COMMAND [conn16] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.372-0500 c20013| 2016-04-06T02:52:21.643-0500 D REPL [conn16] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929139000|5, t: 2 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.372-0500 c20013| 2016-04-06T02:52:21.643-0500 D REPL [conn16] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999985μs [js_test:multi_coll_drop] 2016-04-06T02:53:31.374-0500 c20013| 
2016-04-06T02:52:21.645-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.376-0500 c20013| 2016-04-06T02:52:21.645-0500 D REPL [conn15] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929139000|5, t: 2 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929130000|10, t: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.378-0500 c20013| 2016-04-06T02:52:21.645-0500 D REPL [conn15] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999986μs [js_test:multi_coll_drop] 2016-04-06T02:53:31.379-0500 c20013| 2016-04-06T02:52:22.554-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.385-0500 c20013| 2016-04-06T02:52:22.554-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1043 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:52.554-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.388-0500 c20013| 2016-04-06T02:52:22.554-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1043 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.389-0500 c20013| 2016-04-06T02:52:22.554-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1043 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.390-0500 c20013| 2016-04-06T02:52:22.554-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20012 starting at filter: { ts: { $gte: Timestamp 1459929130000|10 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.391-0500 c20013| 2016-04-06T02:52:22.554-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.397-0500 c20013| 2016-04-06T02:52:22.554-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1045 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.554-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929130000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.398-0500 c20013| 2016-04-06T02:52:22.554-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.398-0500 c20013| 2016-04-06T02:52:22.555-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1046 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.409-0500 c20013| 2016-04-06T02:52:22.555-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
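The fetcher command just above is an ordinary tailable find on local.oplog.rs, so the read the syncing secondary issues can be reproduced by hand. Everything below except the connection line is copied from the command shape in the log; the internal term field is omitted, and Timestamp(1459929130, 10) is assumed to be the shell spelling of the logged Timestamp 1459929130000|10:

  var sync = new Mongo("mongovm16:20012");
  var res = sync.getDB("local").runCommand({
      find: "oplog.rs",
      filter: { ts: { $gte: Timestamp(1459929130, 10) } },
      tailable: true,
      awaitData: true,
      oplogReplay: true,
      maxTimeMS: 60000
  });
  // first batch: the same entries replayed to c20013 in the response below
  printjson(res.cursor.firstBatch);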
2016-04-06T02:53:31.420-0500 c20013| 2016-04-06T02:52:22.555-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1047 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.421-0500 c20013| 2016-04-06T02:52:22.555-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.421-0500 c20013| 2016-04-06T02:52:22.556-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.424-0500 c20013| 2016-04-06T02:52:22.556-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1046 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:31.424-0500 c20013| 2016-04-06T02:52:22.556-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1045 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.427-0500 s20014| 2016-04-06T02:53:18.273-0500 D ASIO [Balancer] startCommand: RemoteCommand 417 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:48.273-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929198273), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.427-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.431-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.433-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.433-0500 c20011| 2016-04-06T02:52:42.838-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.444-0500 c20011| 2016-04-06T02:52:42.838-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.450-0500 c20011| 2016-04-06T02:52:42.838-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.452-0500 c20011| 2016-04-06T02:52:42.838-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:31.458-0500 c20012| 2016-04-06T02:53:04.695-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.460-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.462-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.466-0500 c20011| 2016-04-06T02:52:42.838-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.474-0500 c20013| 2016-04-06T02:52:22.557-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1045 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929130000|10, t: 1, h: 3135197531614568333, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929139000|2, t: 2, h: -9164491805014394944, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1459929139000|3, t: 2, h: -3935544630640156266, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03365c17830b843f1a5'), state: 2, when: new Date(1459929139585), why: "splitting chunk [{ _id: -81.0 }, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929139000|4, t: 2, h: -8260193851631985048, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929137199), up: 10, waiting: false } } }, { ts: Timestamp 1459929139000|5, t: 2, h: 666054914550689290, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929137435), up: 10, waiting: false } } }, { ts: Timestamp 1459929141000|1, t: 2, h: 1487969004901916751, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929141645), up: 14, waiting: true } } } ], id: 25449496203, ns: 
"local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.477-0500 c20013| 2016-04-06T02:52:22.557-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929141000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.479-0500 c20013| 2016-04-06T02:52:22.557-0500 D REPL [rsBackgroundSync-0] fetcher read 6 operations from remote oplog starting at ts: Timestamp 1459929130000|10 and ending at ts: Timestamp 1459929141000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:31.479-0500 c20013| 2016-04-06T02:52:22.557-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1048 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.482-0500 c20013| 2016-04-06T02:52:22.557-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:31.486-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.486-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.492-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.493-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.499-0500 c20011| 2016-04-06T02:52:42.840-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1c1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162840), why: "splitting chunk [{ _id: -67.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.500-0500 c20011| 2016-04-06T02:52:42.840-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.502-0500 c20011| 2016-04-06T02:52:42.840-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.504-0500 c20011| 2016-04-06T02:52:42.840-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.507-0500 c20011| 2016-04-06T02:52:42.840-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.509-0500 c20011| 2016-04-06T02:52:42.843-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.513-0500 c20011| 2016-04-06T02:52:42.846-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.519-0500 c20011| 2016-04-06T02:52:42.856-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.519-0500 c20011| 2016-04-06T02:52:42.856-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:31.524-0500 c20011| 2016-04-06T02:52:42.856-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.526-0500 c20011| 2016-04-06T02:52:42.856-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|24, t: 3 } and is durable through: { ts: Timestamp 1459929162000|23, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.533-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.534-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.547-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.552-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.553-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.555-0500 c20013| 2016-04-06T02:52:22.557-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool 
[js_test:multi_coll_drop] 2016-04-06T02:53:31.568-0500 c20013| 2016-04-06T02:52:22.558-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.569-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.571-0500 c20011| 2016-04-06T02:52:42.857-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.575-0500 c20011| 2016-04-06T02:52:42.862-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|24, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|23, t: 3 }, name-id: "238" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.580-0500 c20011| 2016-04-06T02:52:42.864-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.582-0500 c20011| 2016-04-06T02:52:42.864-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:31.590-0500 c20011| 2016-04-06T02:52:42.864-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.593-0500 c20011| 2016-04-06T02:52:42.864-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|24, t: 3 } and is durable through: { ts: Timestamp 1459929162000|24, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.595-0500 c20011| 2016-04-06T02:52:42.864-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|24, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.601-0500 c20011| 2016-04-06T02:52:42.864-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 
0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.609-0500 c20011| 2016-04-06T02:52:42.865-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1c1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162840), why: "splitting chunk [{ _id: -67.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04a65c17830b843f1c1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162840), why: "splitting chunk [{ _id: -67.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 24ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.613-0500 c20011| 2016-04-06T02:52:42.865-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 18ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.618-0500 c20011| 2016-04-06T02:52:42.866-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|68 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|24, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.621-0500 c20011| 2016-04-06T02:52:42.866-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|24, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.625-0500 c20011| 2016-04-06T02:52:42.866-0500 D COMMAND [conn40] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|68 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|24, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.626-0500 c20011| 2016-04-06T02:52:42.866-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:31.635-0500 c20011| 2016-04-06T02:52:42.866-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|24, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.643-0500 c20011| 2016-04-06T02:52:42.870-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|68 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|24, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.651-0500 c20011| 2016-04-06T02:52:42.870-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: -66.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-67.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-66.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|68 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.652-0500 c20011| 2016-04-06T02:52:42.870-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:31.655-0500 c20011| 2016-04-06T02:52:42.870-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:31.657-0500 c20013| 2016-04-06T02:52:22.558-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1048 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:31.659-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.664-0500 c20013| 2016-04-06T02:52:22.558-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1047 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:31.666-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer 
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.669-0500 c20013| 2016-04-06T02:52:22.558-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:31.670-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.670-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.672-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.674-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.677-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.678-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.678-0500 c20013| 2016-04-06T02:52:22.558-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1047 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.680-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.681-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.684-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.688-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.693-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.703-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.715-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.744-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.747-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.747-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.753-0500 c20011| 2016-04-06T02:52:42.871-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ 
Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.754-0500 c20011| 2016-04-06T02:52:42.871-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-67.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.755-0500 c20011| 2016-04-06T02:52:42.871-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-66.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.759-0500 c20011| 2016-04-06T02:52:42.876-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|24, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.761-0500 c20013| 2016-04-06T02:52:22.558-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:31.761-0500 s20014| 2016-04-06T02:53:18.273-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 417 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:31.764-0500 c20011| 2016-04-06T02:52:42.876-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|25, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|24, t: 3 }, name-id: "239" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.768-0500 c20011| 2016-04-06T02:52:42.878-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.770-0500 c20011| 2016-04-06T02:52:42.878-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:31.773-0500 c20011| 2016-04-06T02:52:42.878-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.777-0500 c20011| 2016-04-06T02:52:42.878-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|25, t: 3 } and is durable through: { ts: Timestamp 1459929162000|24, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.778-0500 c20011| 2016-04-06T02:52:42.878-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|25, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|24, t: 3 }, name-id: "239" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.793-0500 c20011| 2016-04-06T02:52:42.878-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 
1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.798-0500 c20011| 2016-04-06T02:52:42.881-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|24, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.809-0500 c20011| 2016-04-06T02:52:42.894-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.811-0500 c20011| 2016-04-06T02:52:42.894-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:31.812-0500 c20011| 2016-04-06T02:52:42.894-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.820-0500 c20011| 2016-04-06T02:52:42.894-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|25, t: 3 } and is durable through: { ts: Timestamp 1459929162000|25, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.822-0500 c20011| 2016-04-06T02:52:42.894-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|25, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.840-0500 c20011| 2016-04-06T02:52:42.894-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.845-0500 c20011| 2016-04-06T02:52:42.894-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: -66.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-67.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: 
MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-66.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|68 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 23ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.854-0500 c20011| 2016-04-06T02:52:42.894-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.894-0500-5704c04a65c17830b843f1c2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162894), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -67.0 }, max: { _id: MaxKey } }, left: { min: { _id: -67.0 }, max: { _id: -66.0 }, lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -66.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.865-0500 c20011| 2016-04-06T02:52:42.895-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|24, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.868-0500 c20011| 2016-04-06T02:52:42.900-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|25, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.871-0500 c20011| 2016-04-06T02:52:42.901-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|26, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|25, t: 3 }, name-id: "240" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.875-0500 c20011| 2016-04-06T02:52:42.903-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.876-0500 c20011| 2016-04-06T02:52:42.903-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:31.879-0500 c20011| 2016-04-06T02:52:42.903-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.881-0500 c20011| 2016-04-06T02:52:42.903-0500 D REPL [conn35] 
received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|26, t: 3 } and is durable through: { ts: Timestamp 1459929162000|25, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.883-0500 c20011| 2016-04-06T02:52:42.903-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929162000|26, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|25, t: 3 }, name-id: "240" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.887-0500 c20011| 2016-04-06T02:52:42.903-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.889-0500 c20011| 2016-04-06T02:52:42.908-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.890-0500 c20011| 2016-04-06T02:52:42.908-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:31.892-0500 c20011| 2016-04-06T02:52:42.910-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } numYields:0 reslen:480 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.893-0500 c20011| 2016-04-06T02:52:42.910-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.894-0500 c20011| 2016-04-06T02:52:42.910-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:31.896-0500 c20011| 2016-04-06T02:52:42.910-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.896-0500 c20011| 2016-04-06T02:52:42.910-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|26, t: 3 } and is durable through: { ts: Timestamp 1459929162000|26, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.897-0500 c20011| 2016-04-06T02:52:42.910-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|26, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.901-0500 c20011| 2016-04-06T02:52:42.910-0500 I COMMAND [conn35] command admin.$cmd command: 
replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.905-0500 c20011| 2016-04-06T02:52:42.911-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|25, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 11ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.910-0500 c20011| 2016-04-06T02:52:42.911-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:42.894-0500-5704c04a65c17830b843f1c2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162894), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -67.0 }, max: { _id: MaxKey } }, left: { min: { _id: -67.0 }, max: { _id: -66.0 }, lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -66.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.915-0500 c20011| 2016-04-06T02:52:42.912-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1c1') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.917-0500 c20011| 2016-04-06T02:52:42.912-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.918-0500 c20011| 2016-04-06T02:52:42.912-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { ts: ObjectId('5704c04a65c17830b843f1c1') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.920-0500 c20011| 2016-04-06T02:52:42.913-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|26, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.924-0500 c20011| 2016-04-06T02:52:42.920-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|26, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.926-0500 c20011| 2016-04-06T02:52:42.923-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.928-0500 c20011| 2016-04-06T02:52:42.923-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:31.930-0500 c20011| 2016-04-06T02:52:42.923-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.932-0500 c20011| 2016-04-06T02:52:42.923-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|27, t: 3 } and is durable through: { ts: Timestamp 1459929162000|26, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.942-0500 c20011| 2016-04-06T02:52:42.923-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.952-0500 c20011| 2016-04-06T02:52:42.923-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|26, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:31.954-0500 c20011| 2016-04-06T02:52:42.934-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|27, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|26, t: 3 }, name-id: "241" } [js_test:multi_coll_drop] 2016-04-06T02:53:31.959-0500 c20011| 
2016-04-06T02:52:42.936-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:31.960-0500 c20011| 2016-04-06T02:52:42.936-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:31.961-0500 c20011| 2016-04-06T02:52:42.936-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.967-0500 c20011| 2016-04-06T02:52:42.936-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|27, t: 3 } and is durable through: { ts: Timestamp 1459929162000|27, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.971-0500 c20011| 2016-04-06T02:52:42.936-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|27, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:31.986-0500 c20011| 2016-04-06T02:52:42.936-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:31.991-0500 c20011| 2016-04-06T02:52:42.936-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|26, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.027-0500 c20011| 2016-04-06T02:52:42.937-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1c1') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 24ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.028-0500 c20011| 2016-04-06T02:52:42.937-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.032-0500 c20011| 2016-04-06T02:52:42.942-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|68 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.037-0500 c20011| 2016-04-06T02:52:42.942-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.040-0500 c20011| 2016-04-06T02:52:42.942-0500 D COMMAND [conn36] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|68 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.041-0500 c20011| 2016-04-06T02:52:42.942-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:32.047-0500 c20011| 2016-04-06T02:52:42.944-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|68 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.050-0500 c20011| 2016-04-06T02:52:42.954-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1c3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162952), why: "splitting chunk [{ _id: -66.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.054-0500 c20011| 2016-04-06T02:52:42.954-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.057-0500 c20011| 2016-04-06T02:52:42.954-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.060-0500 c20011| 2016-04-06T02:52:42.954-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.068-0500 c20011| 2016-04-06T02:52:42.958-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } cursorid:19853084149 numYields:1 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.323-0500 c20011| 2016-04-06T02:52:42.961-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.324-0500 c20011| 2016-04-06T02:52:42.961-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.332-0500 c20011| 2016-04-06T02:52:42.961-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.334-0500 c20011| 2016-04-06T02:52:42.961-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|28, t: 3 } and is durable through: { ts: Timestamp 1459929162000|27, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.338-0500 c20011| 2016-04-06T02:52:42.961-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.342-0500 c20011| 2016-04-06T02:52:42.961-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.344-0500 c20011| 2016-04-06T02:52:43.029-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929162000|28, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|27, t: 3 }, name-id: "242" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.354-0500 c20011| 2016-04-06T02:52:43.046-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.355-0500 c20011| 2016-04-06T02:52:43.046-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.360-0500 c20011| 2016-04-06T02:52:43.046-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.362-0500 c20011| 2016-04-06T02:52:43.046-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|28, t: 3 } and is durable through: { ts: Timestamp 1459929162000|28, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.363-0500 c20011| 2016-04-06T02:52:43.046-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|28, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.367-0500 c20011| 2016-04-06T02:52:43.046-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.369-0500 c20011| 2016-04-06T02:52:43.047-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 85ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.372-0500 c20011| 2016-04-06T02:52:43.047-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|28, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.381-0500 c20011| 2016-04-06T02:52:43.047-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04a65c17830b843f1c3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162952), why: "splitting chunk [{ _id: -66.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04a65c17830b843f1c3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929162952), why: "splitting chunk [{ _id: -66.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 
nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 93ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.384-0500 c20011| 2016-04-06T02:52:43.055-0500 D COMMAND [conn40] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|28, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.386-0500 c20011| 2016-04-06T02:52:43.055-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|28, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.391-0500 c20011| 2016-04-06T02:52:43.055-0500 D COMMAND [conn40] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|28, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.395-0500 c20011| 2016-04-06T02:52:43.055-0500 D QUERY [conn40] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:32.399-0500 c20011| 2016-04-06T02:52:43.055-0500 I COMMAND [conn40] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|28, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.400-0500 c20011| 2016-04-06T02:52:43.056-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|70 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|28, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.403-0500 c20011| 2016-04-06T02:52:43.056-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|28, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.404-0500 c20011| 2016-04-06T02:52:43.056-0500 D COMMAND [conn40] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|70 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|28, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.408-0500 c20011| 2016-04-06T02:52:43.056-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:32.410-0500 c20011| 2016-04-06T02:52:43.057-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|70 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|28, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.416-0500 c20011| 2016-04-06T02:52:43.083-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: -65.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-66.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|70 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.418-0500 c20011| 2016-04-06T02:52:43.083-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:32.419-0500 c20011| 2016-04-06T02:52:43.083-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:32.422-0500 c20011| 2016-04-06T02:52:43.083-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.423-0500 c20011| 2016-04-06T02:52:43.083-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-66.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.424-0500 c20011| 2016-04-06T02:52:43.083-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-65.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.435-0500 c20011| 2016-04-06T02:52:43.083-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", 
maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|28, t: 3 } } cursorid:19853084149 numYields:1 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 36ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.438-0500 c20011| 2016-04-06T02:52:43.087-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|28, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.449-0500 c20011| 2016-04-06T02:52:43.091-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.449-0500 c20011| 2016-04-06T02:52:43.091-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.464-0500 c20011| 2016-04-06T02:52:43.091-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.485-0500 c20011| 2016-04-06T02:52:43.091-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|1, t: 3 } and is durable through: { ts: Timestamp 1459929162000|28, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.490-0500 c20011| 2016-04-06T02:52:43.091-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.491-0500 c20011| 2016-04-06T02:52:43.106-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929163000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929162000|28, t: 3 }, name-id: "243" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.498-0500 c20011| 2016-04-06T02:52:43.116-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.498-0500 
c20011| 2016-04-06T02:52:43.116-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.501-0500 c20011| 2016-04-06T02:52:43.116-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.503-0500 c20011| 2016-04-06T02:52:43.116-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|1, t: 3 } and is durable through: { ts: Timestamp 1459929163000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.506-0500 c20011| 2016-04-06T02:52:43.116-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.514-0500 c20011| 2016-04-06T02:52:43.116-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.518-0500 c20011| 2016-04-06T02:52:43.118-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: -65.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-66.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|70 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 35ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.523-0500 c20011| 2016-04-06T02:52:43.119-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:43.119-0500-5704c04b65c17830b843f1c4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163119), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -66.0 }, max: { _id: MaxKey } }, left: { min: { _id: -66.0 }, max: { _id: -65.0 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -65.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 
} [js_test:multi_coll_drop] 2016-04-06T02:53:32.527-0500 c20011| 2016-04-06T02:52:43.119-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|28, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 32ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.530-0500 c20011| 2016-04-06T02:52:43.120-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|1, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.537-0500 c20011| 2016-04-06T02:52:43.122-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|1, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.545-0500 c20011| 2016-04-06T02:52:43.126-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|1, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.545-0500 c20011| 2016-04-06T02:52:43.128-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929163000|2, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|1, t: 3 }, name-id: "244" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.553-0500 c20011| 2016-04-06T02:52:43.131-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.554-0500 c20011| 2016-04-06T02:52:43.131-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.557-0500 c20011| 2016-04-06T02:52:43.131-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.563-0500 c20011| 2016-04-06T02:52:43.131-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|2, t: 3 } and is durable through: { ts: Timestamp 1459929163000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.565-0500 c20011| 2016-04-06T02:52:43.131-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929163000|2, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|1, t: 3 }, name-id: "244" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.569-0500 c20011| 
2016-04-06T02:52:43.131-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.576-0500 c20011| 2016-04-06T02:52:43.139-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.576-0500 c20011| 2016-04-06T02:52:43.139-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.582-0500 c20011| 2016-04-06T02:52:43.139-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.586-0500 c20011| 2016-04-06T02:52:43.139-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|2, t: 3 } and is durable through: { ts: Timestamp 1459929163000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.587-0500 c20011| 2016-04-06T02:52:43.139-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.592-0500 c20011| 2016-04-06T02:52:43.139-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.605-0500 c20011| 2016-04-06T02:52:43.159-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:43.119-0500-5704c04b65c17830b843f1c4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163119), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -66.0 }, max: { _id: MaxKey } }, left: { min: { _id: -66.0 }, max: { _id: -65.0 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -65.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], 
writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 40ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.608-0500 c20011| 2016-04-06T02:52:43.159-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|1, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 32ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.611-0500 c20011| 2016-04-06T02:52:43.160-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1c3') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.615-0500 c20011| 2016-04-06T02:52:43.160-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.622-0500 c20011| 2016-04-06T02:52:43.160-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04a65c17830b843f1c3') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.625-0500 c20011| 2016-04-06T02:52:43.161-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.629-0500 c20011| 2016-04-06T02:52:43.161-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|2, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.633-0500 c20011| 2016-04-06T02:52:43.163-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929163000|3, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|2, t: 3 }, name-id: "245" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.635-0500 c20011| 2016-04-06T02:52:43.164-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.643-0500 c20011| 2016-04-06T02:52:43.180-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, 
memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.646-0500 c20011| 2016-04-06T02:52:43.180-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.651-0500 c20011| 2016-04-06T02:52:43.180-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.653-0500 c20011| 2016-04-06T02:52:43.180-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|3, t: 3 } and is durable through: { ts: Timestamp 1459929163000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.656-0500 c20011| 2016-04-06T02:52:43.180-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929163000|3, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|2, t: 3 }, name-id: "245" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.666-0500 c20011| 2016-04-06T02:52:43.180-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.673-0500 c20011| 2016-04-06T02:52:43.186-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.673-0500 c20011| 2016-04-06T02:52:43.186-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.675-0500 c20011| 2016-04-06T02:52:43.186-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.677-0500 c20011| 2016-04-06T02:52:43.188-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|3, t: 3 } and is durable through: { ts: Timestamp 1459929163000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.679-0500 c20011| 2016-04-06T02:52:43.188-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.683-0500 c20011| 2016-04-06T02:52:43.188-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, 
appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.691-0500 c20011| 2016-04-06T02:52:43.189-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04a65c17830b843f1c3') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 29ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.701-0500 c20011| 2016-04-06T02:52:43.191-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|2, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 26ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.701-0500 c20011| 2016-04-06T02:52:43.192-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.704-0500 c20011| 2016-04-06T02:52:43.195-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.705-0500 c20011| 2016-04-06T02:52:43.195-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.706-0500 c20011| 2016-04-06T02:52:43.195-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.725-0500 c20011| 2016-04-06T02:52:43.195-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:32.736-0500 c20011| 2016-04-06T02:52:43.197-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.744-0500 c20011| 2016-04-06T02:52:43.203-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04b65c17830b843f1c5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929163203), why: "splitting chunk [{ _id: -65.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.745-0500 c20011| 2016-04-06T02:52:43.203-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.745-0500 c20011| 2016-04-06T02:52:43.203-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.749-0500 c20011| 2016-04-06T02:52:43.204-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.754-0500 c20011| 2016-04-06T02:52:43.205-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.756-0500 c20011| 2016-04-06T02:52:43.209-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.760-0500 c20011| 2016-04-06T02:52:43.214-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.762-0500 c20011| 2016-04-06T02:52:43.214-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.765-0500 c20011| 2016-04-06T02:52:43.214-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.765-0500 c20011| 2016-04-06T02:52:43.214-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|4, t: 3 } and is durable through: { ts: Timestamp 1459929163000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.771-0500 c20011| 2016-04-06T02:52:43.214-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.771-0500 c20011| 2016-04-06T02:52:43.223-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929163000|4, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|3, t: 3 }, name-id: "246" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.774-0500 c20011| 2016-04-06T02:52:43.226-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.775-0500 c20011| 2016-04-06T02:52:43.226-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.783-0500 c20011| 2016-04-06T02:52:43.226-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.805-0500 c20011| 2016-04-06T02:52:43.226-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|4, t: 3 } and is durable through: { ts: Timestamp 1459929163000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.808-0500 c20011| 2016-04-06T02:52:43.226-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.813-0500 c20011| 2016-04-06T02:52:43.226-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.818-0500 c20011| 2016-04-06T02:52:43.230-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04b65c17830b843f1c5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929163203), why: "splitting chunk [{ _id: -65.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04b65c17830b843f1c5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929163203), why: "splitting chunk [{ _id: -65.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 26ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.846-0500 c20011| 2016-04-06T02:52:43.230-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 20ms 
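(Annotation: the entries above, from the findAndModify that flips config.locks state 0 -> 2 through the applyOps and changelog insert to the findAndModify that sets state back to 0, are one complete iteration of the chunk-split metadata protocol against this config replica set: take the distributed lock for the namespace, re-read config.collections and config.chunks with readConcern { level: "majority", afterOpTime: ... }, commit both halves of the split atomically with a preCondition on the current highest lastmod, record the split in config.changelog, and release the lock, with every write waiting on w: "majority". Below is a minimal mongo-shell sketch of the same document shapes; field names and values are copied from the log entries above, and it is an illustrative reconstruction, not the server's internal code path.)

    // Illustrative reconstruction of the lock/split cycle recorded above.
    // Values are taken from the surrounding log; run against a config server.
    var cfg = db.getSiblingDB("config");
    var lockId = ObjectId(); // the log shows e.g. ObjectId('5704c04b65c17830b843f1c5')

    // 1. Acquire the distributed lock for the namespace (state 0 -> 2).
    var lock = cfg.locks.findAndModify({
        query:  { _id: "multidrop.coll", state: 0 },
        update: { $set: { ts: lockId, state: 2,
                          who: "mongovm16:20010:1459929128:185613966:conn5",
                          process: "mongovm16:20010:1459929128:185613966",
                          when: new Date(),
                          why: "splitting chunk [{ _id: -65.0 }, { _id: MaxKey }) in multidrop.coll" } },
        upsert: true, new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });

    // 2. Commit both halves of the split in one applyOps; the preCondition
    //    rejects the write if another caller already bumped the chunk version.
    cfg.runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp(1000, 73),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: -64.0 },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-65.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp(1000, 74),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-64.0" } }
        ],
        preCondition: [ { ns: "config.chunks",
                          q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } },
                          res: { lastmod: Timestamp(1000, 72) } } ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });

    // 3. Record the split in the changelog (details elided), then release the lock.
    cfg.changelog.insert({ what: "split", ns: "multidrop.coll" },
                         { writeConcern: { w: "majority", wtimeout: 15000 } });
    cfg.locks.findAndModify({
        query:  { ts: lockId },
        update: { $set: { state: 0 } },
        writeConcern: { w: "majority", wtimeout: 15000 }
    });

(The preCondition provides optimistic concurrency control: a competing splitter that already advanced lastmod causes the applyOps to fail rather than silently overwrite newer chunk versions, and the w: "majority" writes plus majority reads explain the repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" waits visible throughout this section while the commit point catches up across terms.)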
[js_test:multi_coll_drop] 2016-04-06T02:53:32.853-0500 c20011| 2016-04-06T02:52:43.231-0500 D COMMAND [conn40] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.860-0500 c20011| 2016-04-06T02:52:43.231-0500 D COMMAND [conn40] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|4, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.867-0500 c20011| 2016-04-06T02:52:43.231-0500 D COMMAND [conn40] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.867-0500 c20011| 2016-04-06T02:52:43.231-0500 D QUERY [conn40] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:32.868-0500 c20011| 2016-04-06T02:52:43.231-0500 I COMMAND [conn40] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|4, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.869-0500 c20011| 2016-04-06T02:52:43.232-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|4, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.875-0500 c20011| 2016-04-06T02:52:43.232-0500 D COMMAND [conn40] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: -64.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|72 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.875-0500 c20011| 2016-04-06T02:52:43.232-0500 D QUERY [conn40] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:32.877-0500 c20011| 2016-04-06T02:52:43.233-0500 D QUERY [conn40] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:32.881-0500 c20011| 2016-04-06T02:52:43.233-0500 I COMMAND [conn40] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { 
lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.881-0500 c20011| 2016-04-06T02:52:43.233-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-65.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.882-0500 c20011| 2016-04-06T02:52:43.233-0500 D QUERY [conn40] Using idhack: { _id: "multidrop.coll-_id_-64.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.884-0500 c20011| 2016-04-06T02:52:43.234-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|4, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:1038 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.887-0500 c20011| 2016-04-06T02:52:43.236-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|4, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:32.888-0500 c20011| 2016-04-06T02:52:43.242-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929163000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|4, t: 3 }, name-id: "247" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.895-0500 c20011| 2016-04-06T02:52:43.243-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.897-0500 c20011| 2016-04-06T02:52:43.243-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.897-0500 c20011| 2016-04-06T02:52:43.243-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.901-0500 c20011| 2016-04-06T02:52:43.243-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|5, t: 3 } and is durable through: { ts: Timestamp 1459929163000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.909-0500 c20011| 2016-04-06T02:52:43.243-0500 D REPL [conn35] Required snapshot optime: { ts: Timestamp 1459929163000|5, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|4, t: 3 }, name-id: "247" } [js_test:multi_coll_drop] 2016-04-06T02:53:32.925-0500 c20011| 2016-04-06T02:52:43.243-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.952-0500 c20011| 2016-04-06T02:52:43.259-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.962-0500 c20011| 2016-04-06T02:52:43.259-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:32.964-0500 c20011| 2016-04-06T02:52:43.259-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.972-0500 c20011| 2016-04-06T02:52:43.259-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|5, t: 3 } and is durable through: { ts: Timestamp 1459929163000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.974-0500 c20011| 2016-04-06T02:52:43.259-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.979-0500 c20011| 2016-04-06T02:52:43.259-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.983-0500 c20011| 2016-04-06T02:52:43.260-0500 I COMMAND [conn40] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: -64.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|72 } } ], 
writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 27ms [js_test:multi_coll_drop] 2016-04-06T02:53:32.983-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:32.983-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:32.984-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:32.987-0500 c20012| 2016-04-06T02:53:04.695-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1140 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.988-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:32.988-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:32.989-0500 c20012| 2016-04-06T02:53:04.695-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1140 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:32.989-0500 c20012| 2016-04-06T02:53:04.695-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:32.990-0500 c20012| 2016-04-06T02:53:04.695-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1140 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:32.991-0500 c20012| 2016-04-06T02:53:04.695-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:32.992-0500 c20012| 2016-04-06T02:53:04.695-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29972475μs [js_test:multi_coll_drop] 2016-04-06T02:53:32.993-0500 c20012| 2016-04-06T02:53:04.695-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:32.995-0500 c20012| 2016-04-06T02:53:04.695-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.028-0500 c20012| 2016-04-06T02:53:04.695-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1142 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.063-0500 c20012| 2016-04-06T02:53:04.695-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1142 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:33.066-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.095-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.097-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.098-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.099-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.107-0500 c20012| 2016-04-06T02:53:04.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1142 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:33.108-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.112-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.114-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.115-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.119-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.119-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.121-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.122-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.123-0500 c20012| 2016-04-06T02:53:04.696-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:33.124-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.125-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.126-0500 c20012| 2016-04-06T02:53:04.696-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-71.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:33.134-0500 c20012| 2016-04-06T02:53:04.696-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.140-0500 c20012| 2016-04-06T02:53:04.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1144 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.155-0500 c20012| 2016-04-06T02:53:04.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1144 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:33.170-0500 c20012| 2016-04-06T02:53:04.696-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-70.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:33.172-0500 c20012| 2016-04-06T02:53:04.696-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1144 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:33.173-0500 c20012| 
2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.177-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.177-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.178-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.181-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.183-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.186-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.189-0500 c20012| 2016-04-06T02:53:04.697-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.190-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.191-0500 c20012| 2016-04-06T02:53:04.697-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.201-0500 c20012| 2016-04-06T02:53:04.696-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.202-0500 c20012| 2016-04-06T02:53:04.697-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.210-0500 c20012| 2016-04-06T02:53:04.697-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.211-0500 c20012| 2016-04-06T02:53:04.697-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.213-0500 c20012| 2016-04-06T02:53:04.697-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.215-0500 c20012| 2016-04-06T02:53:04.697-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.227-0500 c20012| 2016-04-06T02:53:04.697-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:33.235-0500 c20012| 2016-04-06T02:53:04.697-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.238-0500 c20012| 2016-04-06T02:53:04.697-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1146 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.239-0500 c20012| 2016-04-06T02:53:04.697-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29970825μs [js_test:multi_coll_drop] 2016-04-06T02:53:33.241-0500 c20012| 2016-04-06T02:53:04.697-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1146 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:33.241-0500 c20012| 2016-04-06T02:53:04.697-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1146 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:33.243-0500 c20012| 2016-04-06T02:53:04.697-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:33.243-0500 c20012| 2016-04-06T02:53:04.697-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.244-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.245-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.248-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.249-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.256-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.263-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.263-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.266-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.282-0500 c20012| 2016-04-06T02:53:04.698-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.285-0500 c20012| 2016-04-06T02:53:04.698-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1148 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.286-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.291-0500 c20012| 2016-04-06T02:53:04.698-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1148 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:33.294-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool 
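This replSetUpdatePosition chatter is what moves the sync source's commit point forward, and it is the same machinery the earlier "Waiting for 'committed' snapshot" entries block on: a readConcern "majority" find cannot return until the requested opTime is covered by a majority-committed snapshot. Roughly, the read the config server was servicing looks like this from the shell (internal callers also pass afterOpTime, omitted here):

    // Sketch: majority read against the config server, as in the
    // config.collections find logged above. Blocks (up to maxTimeMS)
    // until the 'committed' snapshot catches up.
    var reply = db.getSiblingDB("config").runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });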
[js_test:multi_coll_drop] 2016-04-06T02:53:33.304-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.305-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.306-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.310-0500 c20012| 2016-04-06T02:53:04.698-0500 D REPL [rsSync] replication batch size is 3 [js_test:multi_coll_drop] 2016-04-06T02:53:33.311-0500 c20012| 2016-04-06T02:53:04.698-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1148 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:33.315-0500 s20015| 2016-04-06T02:53:18.968-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 99 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:41.723-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929191723) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:33.317-0500 s20015| 2016-04-06T02:53:18.968-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 99 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:33.319-0500 s20015| 2016-04-06T02:53:18.968-0500 D NETWORK [replSetDistLockPinger] Marking host mongovm16:20013 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:33.321-0500 d20010| 2016-04-06T02:53:18.968-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:33.322-0500 s20015| 2016-04-06T02:53:18.968-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:33.325-0500 s20015| 2016-04-06T02:53:18.968-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 100 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:48.271-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198271), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:33.326-0500 s20015| 2016-04-06T02:53:18.968-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 100 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:33.327-0500 s20015| 2016-04-06T02:53:18.969-0500 D NETWORK [Balancer] Marking host mongovm16:20013 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:33.327-0500 s20015| 2016-04-06T02:53:18.969-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:33.328-0500 2016-04-06T02:53:18.969-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929186841388 microSec, clearing pool for mongovm16:20013 of 0 connections 
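When mongovm16:20013 drops off, s20015 gets HostUnreachable: End of file, marks the host as failed, clears its connection pool, and the Balancer notes the error is retriable and retries. A hedged sketch of that retry pattern in jstest-style shell code follows; runCmd is a hypothetical callable standing in for whatever command is being retried, and code 6 is HostUnreachable (it matches the "User Assertion: 6" entry just below):

    // Sketch of the retry-on-retriable-network-error pattern; 'runCmd' is a
    // hypothetical callable, not a shell built-in.
    function retryOnHostUnreachable(runCmd, attempts) {
        for (var i = 0; i < attempts; i++) {
            var res = runCmd();
            if (res.ok || res.code !== 6 /* HostUnreachable */)
                return res;  // success, or a non-retriable failure
            print("retriable network error, retrying: " + tojson(res));
        }
        throw new Error("command still failing after " + attempts + " attempts");
    }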
[js_test:multi_coll_drop] 2016-04-06T02:53:33.329-0500 s20015| 2016-04-06T02:53:18.969-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20011, event detected [js_test:multi_coll_drop] 2016-04-06T02:53:33.330-0500 s20015| 2016-04-06T02:53:18.969-0500 I NETWORK [Balancer] Socket closed remotely, no longer connected (idle 14 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:53:33.332-0500 s20015| 2016-04-06T02:53:18.969-0500 D NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: (NONE):0 error: 9001 socket exception [CLOSED] server [192.168.100.28:20013] [js_test:multi_coll_drop] 2016-04-06T02:53:33.332-0500 s20015| 2016-04-06T02:53:18.969-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:33.334-0500 s20015| 2016-04-06T02:53:18.969-0500 D - [ReplicaSetMonitorWatcher] User Assertion: 6:network error while attempting to run command 'ismaster' on host 'mongovm16:20013' [js_test:multi_coll_drop] 2016-04-06T02:53:33.340-0500 s20015| 2016-04-06T02:53:18.969-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929184668004 microSec, clearing pool for mongovm16:20013 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:53:33.342-0500 s20015| 2016-04-06T02:53:18.969-0500 D NETWORK [ReplicaSetMonitorWatcher] Marking host mongovm16:20013 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:33.359-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.361-0500 c20012| 2016-04-06T02:53:04.698-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:33.371-0500 c20012| 2016-04-06T02:53:04.698-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.373-0500 c20012| 2016-04-06T02:53:04.698-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:33.375-0500 c20012| 2016-04-06T02:53:04.699-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.377-0500 c20012| 2016-04-06T02:53:04.699-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.377-0500 c20012| 2016-04-06T02:53:04.699-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.377-0500 c20012| 2016-04-06T02:53:04.699-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.380-0500 c20012| 2016-04-06T02:53:04.699-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.384-0500 c20012| 2016-04-06T02:53:04.699-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.385-0500 c20012| 2016-04-06T02:53:04.699-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.387-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:33.388-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.390-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.390-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.391-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.393-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.395-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.397-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.397-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.425-0500 c20012| 2016-04-06T02:53:04.700-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:33.432-0500 c20012| 2016-04-06T02:53:04.700-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:33.440-0500 c20012| 2016-04-06T02:53:04.700-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29967550μs [js_test:multi_coll_drop] 2016-04-06T02:53:33.443-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.444-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.444-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.445-0500 c20012| 2016-04-06T02:53:04.700-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.447-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.447-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.448-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.448-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.449-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.449-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.450-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.456-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.457-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.457-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.458-0500 c20012| 2016-04-06T02:53:04.701-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:33.459-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.460-0500 c20012| 2016-04-06T02:53:04.701-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.462-0500 c20012| 2016-04-06T02:53:04.701-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-70.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:33.462-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.464-0500 c20012| 2016-04-06T02:53:04.701-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1150 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.465-0500 c20012| 2016-04-06T02:53:04.701-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1150 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:33.466-0500 c20012| 2016-04-06T02:53:04.701-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-69.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:33.467-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.468-0500 c20013| 2016-04-06T02:52:22.559-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.468-0500 d20010| 2016-04-06T02:53:18.968-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:33.471-0500 s20014| 2016-04-06T02:53:18.969-0500 D ASIO [conn1] startCommand: RemoteCommand 418 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:48.969-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|12, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:33.471-0500 d20010| 2016-04-06T02:53:18.968-0500 I SHARDING [conn5] distributed lock with ts: 5704c06465c17830b843f1cb' unlocked. 
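The "unlocked" message above is the release half of the lock protocol sketched earlier: the owner matches on the ts ObjectId it was granted, so no other process can release the lock out from under it, and sets state back to 0. A sketch under the same assumptions, using the ts value from the log line above:

    // Sketch: release the distributed lock by its grant id; only the
    // document holding this exact ts is touched.
    db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { ts: ObjectId("5704c06465c17830b843f1cb") },
        update: { $set: { state: 0 } },
        writeConcern: { w: "majority", wtimeout: 15000 }
    });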
[js_test:multi_coll_drop] 2016-04-06T02:53:33.475-0500 d20010| 2016-04-06T02:53:18.968-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -62.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -61.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|78, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 10241ms [js_test:multi_coll_drop] 2016-04-06T02:53:33.476-0500 d20010| 2016-04-06T02:53:18.969-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929184722089 microSec, clearing pool for mongovm16:20013 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:53:33.479-0500 d20010| 2016-04-06T02:53:18.969-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 14 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:53:33.481-0500 d20010| 2016-04-06T02:53:18.973-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:33.484-0500 d20010| 2016-04-06T02:53:18.974-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -60.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:33.485-0500 d20010| 2016-04-06T02:53:18.975-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:33.488-0500 d20010| 2016-04-06T02:53:18.991-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:33.493-0500 d20010| 2016-04-06T02:53:18.996-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -59.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:33.496-0500 d20010| 2016-04-06T02:53:18.999-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:33.502-0500 d20010| 2016-04-06T02:53:19.016-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -58.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } 
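Each of these splitChunk requests is the shard-side half of a shell-level split; while another operation holds the collection's distributed lock they fail fast with LockBusy, and the caller simply re-issues them (here with advancing splitKeys). From a shell connected to the mongos, the equivalent request-and-retry loop is roughly:

    // Sketch: request a split via the shell helper and retry while the
    // collection's distributed lock is busy.
    assert.soon(function() {
        var res = sh.splitAt("multidrop.coll", { _id: -60.0 });
        if (res.ok === 1)
            return true;
        assert((res.errmsg || "").indexOf("LockBusy") !== -1, tojson(res));
        return false;  // lock busy: try again
    }, "splitChunk never acquired the collection lock");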
[js_test:multi_coll_drop] 2016-04-06T02:53:33.508-0500 d20010| 2016-04-06T02:53:19.035-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:33.509-0500 d20010| 2016-04-06T02:53:19.036-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -57.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:33.516-0500 d20010| 2016-04-06T02:53:19.040-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:33.517-0500 ReplSetTest Could not call ismaster on node connection to mongovm16:20013: Error: error doing query: failed: network error while attempting to run command 'ismaster' on host 'mongovm16:20013' [js_test:multi_coll_drop] 2016-04-06T02:53:33.517-0500 c20012| 2016-04-06T02:53:04.701-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1150 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:33.518-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.519-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.520-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.522-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.523-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.525-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.526-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.528-0500 c20012| 2016-04-06T02:53:04.701-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.529-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 5] shutting 
down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.541-0500 c20012| 2016-04-06T02:53:04.701-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1151 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:33.541-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.543-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.543-0500 c20012| 2016-04-06T02:53:04.701-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1151 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:33.545-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.545-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.550-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.550-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.551-0500 c20012| 2016-04-06T02:53:04.701-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:33.551-0500 c20012| 2016-04-06T02:53:04.701-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1151 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:33.552-0500 c20012| 2016-04-06T02:53:04.702-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:33.553-0500 c20012| 2016-04-06T02:53:04.702-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:33.556-0500 c20012| 2016-04-06T02:53:04.702-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29966238μs
[js_test:multi_coll_drop] 2016-04-06T02:53:33.569-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.577-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.577-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.580-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.592-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.593-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.597-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.599-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.601-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.612-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.625-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.625-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.626-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.630-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.631-0500 c20012| 2016-04-06T02:53:04.702-0500 D REPL [rsSync] replication batch size is 3
[js_test:multi_coll_drop] 2016-04-06T02:53:33.632-0500 c20012| 2016-04-06T02:53:04.702-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.636-0500 c20012| 2016-04-06T02:53:04.702-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.642-0500 c20012| 2016-04-06T02:53:04.702-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1154 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.644-0500 c20012| 2016-04-06T02:53:04.702-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1154 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:33.646-0500 c20012| 2016-04-06T02:53:04.702-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.646-0500 c20012| 2016-04-06T02:53:04.702-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.651-0500 c20012| 2016-04-06T02:53:04.703-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1154 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.652-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.652-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.655-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.656-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.656-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.656-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.677-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.680-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.683-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.684-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.684-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.685-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.688-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.689-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.690-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.692-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.693-0500 c20012| 2016-04-06T02:53:04.703-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.696-0500 c20012| 2016-04-06T02:53:04.703-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.701-0500 c20012| 2016-04-06T02:53:04.703-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1156 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.701-0500 c20012| 2016-04-06T02:53:04.703-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1156 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:33.703-0500 c20012| 2016-04-06T02:53:04.704-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:33.703-0500 c20012| 2016-04-06T02:53:04.704-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29963900μs
[js_test:multi_coll_drop] 2016-04-06T02:53:33.704-0500 c20012| 2016-04-06T02:53:04.704-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:33.706-0500 c20012| 2016-04-06T02:53:04.704-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1156 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.713-0500 c20012| 2016-04-06T02:53:04.704-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.717-0500 c20012| 2016-04-06T02:53:04.704-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1157 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.718-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.718-0500 c20012| 2016-04-06T02:53:04.704-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1157 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:33.719-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.720-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.723-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.723-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.724-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.725-0500 c20012| 2016-04-06T02:53:04.704-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1157 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.726-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.727-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.727-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.728-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.730-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.732-0500 c20012| 2016-04-06T02:53:04.704-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.732-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.734-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.735-0500 c20012| 2016-04-06T02:53:04.705-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:33.735-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.739-0500 c20012| 2016-04-06T02:53:04.705-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-69.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.740-0500 c20012| 2016-04-06T02:53:04.705-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-68.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.744-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.748-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.755-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.767-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.791-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.799-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.808-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.809-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.811-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.812-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.815-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.815-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.819-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.821-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.824-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.826-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.836-0500 c20012| 2016-04-06T02:53:04.705-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.846-0500 c20012| 2016-04-06T02:53:04.705-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:33.863-0500 c20012| 2016-04-06T02:53:04.705-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.871-0500 c20012| 2016-04-06T02:53:04.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1160 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.872-0500 c20012| 2016-04-06T02:53:04.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1160 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:33.875-0500 c20012| 2016-04-06T02:53:04.706-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:33.875-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.877-0500 c20012| 2016-04-06T02:53:04.706-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1160 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.880-0500 c20012| 2016-04-06T02:53:04.706-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.923-0500 c20012| 2016-04-06T02:53:04.706-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1161 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.925-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.926-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.926-0500 c20012| 2016-04-06T02:53:04.706-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1161 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:33.955-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.959-0500 c20012| 2016-04-06T02:53:04.706-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29962183μs
[js_test:multi_coll_drop] 2016-04-06T02:53:33.975-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.976-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.981-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.981-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.984-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.984-0500 c20012| 2016-04-06T02:53:04.706-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1161 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.985-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.986-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.987-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.989-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.989-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.990-0500 c20012| 2016-04-06T02:53:04.706-0500 D REPL [rsSync] replication batch size is 3
[js_test:multi_coll_drop] 2016-04-06T02:53:33.991-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.991-0500 c20012| 2016-04-06T02:53:04.706-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.993-0500 c20012| 2016-04-06T02:53:04.706-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:33.994-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.995-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.996-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.998-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.998-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:33.998-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.004-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.007-0500 c20012| 2016-04-06T02:53:04.706-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.007-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.008-0500 c20012| 2016-04-06T02:53:04.706-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.009-0500 c20012| 2016-04-06T02:53:04.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1164 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.010-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.010-0500 c20012| 2016-04-06T02:53:04.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1164 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.011-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.011-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.012-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.013-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.013-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.015-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.030-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.034-0500 c20012| 2016-04-06T02:53:04.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1164 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.035-0500 c20012| 2016-04-06T02:53:04.707-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.038-0500 s20014| 2016-04-06T02:53:18.969-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 416 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:41.722-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929191722) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:34.039-0500 s20014| 2016-04-06T02:53:18.969-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 416 finished with response: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:34.040-0500 s20014| 2016-04-06T02:53:18.969-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 417 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:48.273-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929198273), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:34.041-0500 s20014| 2016-04-06T02:53:18.969-0500 D NETWORK [replSetDistLockPinger] Marking host mongovm16:20013 as failed
[js_test:multi_coll_drop] 2016-04-06T02:53:34.042-0500 s20014| 2016-04-06T02:53:18.969-0500 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:34.043-0500 s20014| 2016-04-06T02:53:18.969-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 417 finished with response: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:34.044-0500 s20014| 2016-04-06T02:53:18.970-0500 D NETWORK [Balancer] Marking host mongovm16:20013 as failed
[js_test:multi_coll_drop] 2016-04-06T02:53:34.044-0500 s20014| 2016-04-06T02:53:18.970-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:34.045-0500 s20014| 2016-04-06T02:53:18.970-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:34.047-0500 s20014| 2016-04-06T02:53:18.970-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 418 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:34.047-0500 s20014| 2016-04-06T02:53:18.970-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20013, event detected
[js_test:multi_coll_drop] 2016-04-06T02:53:34.054-0500 s20014| 2016-04-06T02:53:18.970-0500 I NETWORK [Balancer] Socket closed remotely, no longer connected (idle 15 secs, remote host 192.168.100.28:20013)
[js_test:multi_coll_drop] 2016-04-06T02:53:34.058-0500 s20014| 2016-04-06T02:53:18.970-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.064-0500 s20014| 2016-04-06T02:53:18.970-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 418 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.064-0500 s20014| 2016-04-06T02:53:18.971-0500 D SHARDING [conn1] loading chunk manager for collection multidrop.coll using old chunk manager w/ version 1|78||5704c02806c33406d4d9c0c0 and 40 chunks
[js_test:multi_coll_drop] 2016-04-06T02:53:34.066-0500 s20014| 2016-04-06T02:53:18.971-0500 D SHARDING [conn1] major version query from 1|78||5704c02806c33406d4d9c0c0 and over 1 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|78 } }, sort: { lastmod: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.079-0500 s20014| 2016-04-06T02:53:18.971-0500 D ASIO [conn1] startCommand: RemoteCommand 422 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:48.971-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|78 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.079-0500 s20014| 2016-04-06T02:53:18.971-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 422 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:34.087-0500 s20014| 2016-04-06T02:53:18.972-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 422 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: -61.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.088-0500 s20014| 2016-04-06T02:53:18.972-0500 D SHARDING [conn1] loaded 2 chunks into new chunk manager for multidrop.coll with version 1|80||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:34.090-0500 s20014| 2016-04-06T02:53:18.972-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 1ms sequenceNumber: 43 version: 1|80||5704c02806c33406d4d9c0c0 based on: 1|78||5704c02806c33406d4d9c0c0
[js_test:multi_coll_drop] 2016-04-06T02:53:34.094-0500 s20014| 2016-04-06T02:53:18.972-0500 D ASIO [conn1] startCommand: RemoteCommand 424 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:48.972-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.095-0500 s20014| 2016-04-06T02:53:18.972-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 424 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:34.097-0500 s20014| 2016-04-06T02:53:18.973-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 424 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.099-0500 s20014| 2016-04-06T02:53:18.974-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:34.100-0500 s20015| 2016-04-06T02:53:18.969-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, no events
[js_test:multi_coll_drop] 2016-04-06T02:53:34.132-0500 c20012| 2016-04-06T02:53:04.707-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.137-0500 c20012| 2016-04-06T02:53:04.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1166 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.137-0500 c20012| 2016-04-06T02:53:04.707-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.140-0500 c20012| 2016-04-06T02:53:04.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1166 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.141-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.143-0500 c20012| 2016-04-06T02:53:04.707-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29960822μs
[js_test:multi_coll_drop] 2016-04-06T02:53:34.144-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.145-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.149-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.152-0500 c20012| 2016-04-06T02:53:04.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1166 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.153-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.153-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.155-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.156-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.156-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.159-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.161-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.161-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.162-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.163-0500 c20012| 2016-04-06T02:53:04.707-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:34.164-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.164-0500 c20012| 2016-04-06T02:53:04.707-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.165-0500 c20012| 2016-04-06T02:53:04.708-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-68.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.165-0500 c20012| 2016-04-06T02:53:04.708-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-67.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.166-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.170-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.171-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.171-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.172-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.172-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.173-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.173-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.173-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.173-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.174-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.174-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.175-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.175-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.176-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.180-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.180-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.181-0500 c20012| 2016-04-06T02:53:04.708-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.183-0500 c20012| 2016-04-06T02:53:04.708-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.192-0500 c20012| 2016-04-06T02:53:04.708-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.195-0500 c20012| 2016-04-06T02:53:04.708-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1168 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.200-0500 c20012| 2016-04-06T02:53:04.708-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1168 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.200-0500 c20012| 2016-04-06T02:53:04.708-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29959565μs
[js_test:multi_coll_drop] 2016-04-06T02:53:34.203-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.204-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.205-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.208-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.209-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.212-0500 c20012| 2016-04-06T02:53:04.708-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1168 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.212-0500 c20012| 2016-04-06T02:53:04.708-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.216-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.230-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.231-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.232-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.234-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.248-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.249-0500 c20012| 2016-04-06T02:53:04.709-0500 D REPL [rsSync] replication batch size is 3
[js_test:multi_coll_drop] 2016-04-06T02:53:34.262-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.275-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.279-0500 c20012| 2016-04-06T02:53:04.709-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.286-0500 c20012| 2016-04-06T02:53:04.709-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.297-0500 c20012| 2016-04-06T02:53:04.709-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.301-0500 c20012| 2016-04-06T02:53:04.709-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1170 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.302-0500 c20012| 2016-04-06T02:53:04.709-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1170 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.303-0500 c20012| 2016-04-06T02:53:04.709-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1170 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.303-0500 c20012| 2016-04-06T02:53:04.709-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.311-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.312-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.318-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.319-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.321-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.324-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.326-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.327-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.328-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.335-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.338-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.339-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.341-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.343-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.357-0500 c20012| 2016-04-06T02:53:04.710-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.359-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.361-0500 c20012| 2016-04-06T02:53:04.710-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1172 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.362-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.364-0500 c20012| 2016-04-06T02:53:04.710-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1172 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.367-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.370-0500 c20012| 2016-04-06T02:53:04.710-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.371-0500 c20012| 2016-04-06T02:53:04.710-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1172 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.373-0500 c20012| 2016-04-06T02:53:04.710-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.375-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.375-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.383-0500 c20012| 2016-04-06T02:53:04.710-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29957418μs
[js_test:multi_coll_drop] 2016-04-06T02:53:34.387-0500 c20012| 2016-04-06T02:53:04.710-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.397-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.413-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.413-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.415-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.418-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.424-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.427-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.428-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.431-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.432-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.433-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.436-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.437-0500 c20012| 2016-04-06T02:53:04.711-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:34.446-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.475-0500 c20012| 2016-04-06T02:53:04.711-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.478-0500 c20012| 2016-04-06T02:53:04.711-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-67.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.484-0500 c20012| 2016-04-06T02:53:04.711-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1174 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.491-0500 c20012| 2016-04-06T02:53:04.711-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1174 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.491-0500 c20012| 2016-04-06T02:53:04.711-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-66.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.492-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.493-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.493-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.494-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.499-0500 c20012| 2016-04-06T02:53:04.711-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1174 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.504-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.533-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.534-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.535-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.536-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.540-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.542-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.543-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.547-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.548-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.549-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.551-0500 c20012| 2016-04-06T02:53:04.711-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.552-0500 c20012| 2016-04-06T02:53:04.712-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.553-0500 c20012| 2016-04-06T02:53:04.712-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29956195μs
[js_test:multi_coll_drop] 2016-04-06T02:53:34.553-0500 c20012| 2016-04-06T02:53:04.712-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:34.558-0500 c20012| 2016-04-06T02:53:04.712-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:34.561-0500 c20012| 2016-04-06T02:53:04.712-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1176 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:34.562-0500 c20012| 2016-04-06T02:53:04.712-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1176 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:34.564-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.568-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.571-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.573-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.577-0500 c20012| 2016-04-06T02:53:04.712-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1176 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:34.577-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.580-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.580-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.581-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.588-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.589-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.590-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.598-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.601-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.601-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.603-0500 c20012| 2016-04-06T02:53:04.712-0500 D REPL [rsSync] replication batch size is 3 [js_test:multi_coll_drop] 2016-04-06T02:53:34.605-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.607-0500 c20012| 2016-04-06T02:53:04.712-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:34.613-0500 c20012| 2016-04-06T02:53:04.712-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.614-0500 c20012| 2016-04-06T02:53:04.713-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:34.621-0500 c20012| 2016-04-06T02:53:04.713-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:34.625-0500 c20012| 2016-04-06T02:53:04.713-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1178 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:34.626-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.627-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.627-0500 c20012| 2016-04-06T02:53:04.713-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1178 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:34.629-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl 
writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.631-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.632-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.632-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.634-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.635-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.635-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.635-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.642-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.643-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.646-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.648-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.648-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.651-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.653-0500 c20012| 2016-04-06T02:53:04.713-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:34.659-0500 c20012| 2016-04-06T02:53:04.713-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1178 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:34.660-0500 c20012| 2016-04-06T02:53:04.713-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:34.660-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.664-0500 c20012| 2016-04-06T02:53:04.713-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.667-0500 c20012| 2016-04-06T02:53:04.713-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:34.669-0500 c20012| 2016-04-06T02:53:04.714-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1180 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:34.671-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.672-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.673-0500 c20012| 2016-04-06T02:53:04.714-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1180 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:34.674-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.674-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.674-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.677-0500 c20012| 2016-04-06T02:53:04.714-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29954276μs [js_test:multi_coll_drop] 2016-04-06T02:53:34.680-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.680-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:34.680-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool 
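[editor's note] The SyncSourceFeedback traffic above is this secondary (c20012) acknowledging replication progress to its sync source after every applied batch: each replSetUpdatePosition command carries a { durableOpTime, appliedOpTime } pair per member, and member 1's optimes advance monotonically across requests 1172-1180 toward ts 1459929163000|8, the optime conn38 is waiting on below. A small shell sketch (hypothetical helper, not part of this test) that prints the same per-member optimes from any replica-set member:

    // Print each member's last applied optime, the values the reporter forwards above.
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        print(m.name + " -> " + tojson(m.optime));
    });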
[js_test:multi_coll_drop] 2016-04-06T02:53:34.684-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.690-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.694-0500 c20012| 2016-04-06T02:53:04.714-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1180 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.695-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.696-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.697-0500 c20012| 2016-04-06T02:53:04.714-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:34.710-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.713-0500 c20012| 2016-04-06T02:53:04.714-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll-_id_-66.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.713-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.714-0500 c20012| 2016-04-06T02:53:04.714-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll-_id_-65.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.715-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.716-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.718-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.719-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.719-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.720-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.721-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.722-0500 c20012| 2016-04-06T02:53:04.714-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.723-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.724-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.724-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.728-0500 c20012| 2016-04-06T02:53:04.715-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.729-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.733-0500 c20012| 2016-04-06T02:53:04.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1182 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.735-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.735-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.737-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.740-0500 c20012| 2016-04-06T02:53:04.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1182 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.740-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.742-0500 c20012| 2016-04-06T02:53:04.715-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.746-0500 c20012| 2016-04-06T02:53:04.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1182 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.756-0500 c20012| 2016-04-06T02:53:04.715-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.762-0500 c20012| 2016-04-06T02:53:04.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1184 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.764-0500 c20012| 2016-04-06T02:53:04.715-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29952902μs
[js_test:multi_coll_drop] 2016-04-06T02:53:34.766-0500 c20012| 2016-04-06T02:53:04.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1184 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.769-0500 c20012| 2016-04-06T02:53:04.715-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.770-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.772-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.777-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.777-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.778-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.779-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.781-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.781-0500 c20012| 2016-04-06T02:53:04.715-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1184 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.786-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.791-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.800-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.802-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.804-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.809-0500 c20012| 2016-04-06T02:53:04.715-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.812-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.814-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.818-0500 c20012| 2016-04-06T02:53:04.716-0500 D REPL [rsSync] replication batch size is 3
[js_test:multi_coll_drop] 2016-04-06T02:53:34.819-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.820-0500 c20012| 2016-04-06T02:53:04.716-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.820-0500 c20012| 2016-04-06T02:53:04.716-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.821-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.823-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.828-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.830-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.830-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.831-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.833-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.834-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.835-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.836-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.836-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.839-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.840-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.840-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.841-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.842-0500 c20012| 2016-04-06T02:53:04.716-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.843-0500 c20012| 2016-04-06T02:53:04.716-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.846-0500 c20012| 2016-04-06T02:53:04.717-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29951324μs
[js_test:multi_coll_drop] 2016-04-06T02:53:34.851-0500 c20012| 2016-04-06T02:53:04.717-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.858-0500 c20012| 2016-04-06T02:53:04.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1186 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.861-0500 c20012| 2016-04-06T02:53:04.717-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:34.863-0500 c20012| 2016-04-06T02:53:04.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1186 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.866-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.867-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.868-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.869-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.870-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.871-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.875-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.880-0500 c20012| 2016-04-06T02:53:04.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1186 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.889-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.889-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.901-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.904-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.904-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.905-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.919-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.920-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.921-0500 c20012| 2016-04-06T02:53:04.717-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:34.923-0500 c20012| 2016-04-06T02:53:04.717-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.927-0500 c20012| 2016-04-06T02:53:04.717-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-65.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.937-0500 c20012| 2016-04-06T02:53:04.717-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.951-0500 c20012| 2016-04-06T02:53:04.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1188 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.957-0500 c20012| 2016-04-06T02:53:04.717-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1188 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:34.958-0500 c20012| 2016-04-06T02:53:04.717-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-64.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.961-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.962-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.964-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.964-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.966-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.967-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.969-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.971-0500 c20012| 2016-04-06T02:53:04.718-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1188 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:34.971-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.975-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.978-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.978-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.987-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:34.992-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.017-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.023-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.026-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.027-0500 c20012| 2016-04-06T02:53:04.718-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:35.030-0500 c20012| 2016-04-06T02:53:04.718-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.031-0500 c20012| 2016-04-06T02:53:04.718-0500 D REPL [conn38] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29949764μs
[js_test:multi_coll_drop] 2016-04-06T02:53:35.032-0500 c20012| 2016-04-06T02:53:04.718-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:35.033-0500 c20012| 2016-04-06T02:53:04.718-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1190 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.034-0500 c20012| 2016-04-06T02:53:04.718-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1190 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:35.035-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.040-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.041-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.043-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.044-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.049-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.057-0500 c20012| 2016-04-06T02:53:04.718-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1190 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.060-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.060-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.061-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.063-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.064-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.064-0500 c20012| 2016-04-06T02:53:04.718-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.066-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.067-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.068-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.069-0500 c20012| 2016-04-06T02:53:04.719-0500 D REPL [rsSync] replication batch size is 3
[js_test:multi_coll_drop] 2016-04-06T02:53:35.069-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.074-0500 c20012| 2016-04-06T02:53:04.719-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.077-0500 c20012| 2016-04-06T02:53:04.719-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.078-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.088-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.089-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.091-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.092-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.092-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.094-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.095-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.095-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
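[editor's note] The EXECUTOR churn above is the oplog applier's normal batch rhythm: for each "replication batch size is N" entry, the 16 "repl writer worker" threads spin up, the batch's ops are applied, and the idle threads are torn down again, which is why the start/shutdown storms repeat at DEBUG level. The "Using idhack" lines show each applied op targeting a document by exact _id (chunk metadata such as "multidrop.coll-_id_-66.0"), bypassing plan selection entirely. A minimal illustration (hypothetical query, doc _id inferred from the logged values, not part of this test):

    // An exact-_id match takes the IDHACK fast path, so explain() reports
    // IDHACK rather than a ranked plan like the COLLSCAN entries above.
    var chunks = db.getSiblingDB("config").chunks;
    printjson(chunks.find({ _id: "multidrop.coll-_id_-66.0" }).explain().queryPlanner.winningPlan);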
[js_test:multi_coll_drop] 2016-04-06T02:53:35.097-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.103-0500 c20012| 2016-04-06T02:53:04.719-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.104-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.114-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.138-0500 c20012| 2016-04-06T02:53:04.719-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1192 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.158-0500 c20012| 2016-04-06T02:53:04.719-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1192 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:35.164-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.168-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.169-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.195-0500 c20012| 2016-04-06T02:53:04.719-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:35.196-0500 c20012| 2016-04-06T02:53:04.719-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1192 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.199-0500 c20012| 2016-04-06T02:53:04.719-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:35.203-0500 c20012| 2016-04-06T02:53:04.719-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.206-0500 c20012| 2016-04-06T02:53:04.719-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1194 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.207-0500 c20012| 2016-04-06T02:53:04.719-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.212-0500 c20012| 2016-04-06T02:53:04.720-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.217-0500 c20012| 2016-04-06T02:53:04.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1194 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:35.220-0500 c20012| 2016-04-06T02:53:04.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1194 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.226-0500 c20012| 2016-04-06T02:53:04.720-0500 D QUERY [conn38] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:35.232-0500 c20012| 2016-04-06T02:53:04.720-0500 I COMMAND [conn38] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 51ms
[js_test:multi_coll_drop] 2016-04-06T02:53:35.235-0500 c20012| 2016-04-06T02:53:04.720-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.242-0500 c20012| 2016-04-06T02:53:04.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1196 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.244-0500 c20012| 2016-04-06T02:53:04.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1196 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:35.247-0500 c20012| 2016-04-06T02:53:04.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1196 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:35.250-0500 c20012| 2016-04-06T02:53:04.721-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime:
{ ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.254-0500 c20012| 2016-04-06T02:53:04.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1198 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.257-0500 c20012| 2016-04-06T02:53:04.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1198 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.257-0500 c20012| 2016-04-06T02:53:04.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1198 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.259-0500 c20012| 2016-04-06T02:53:04.721-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.271-0500 c20012| 2016-04-06T02:53:04.721-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.282-0500 c20012| 2016-04-06T02:53:04.744-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1200 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:14.744-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.288-0500 c20012| 2016-04-06T02:53:04.744-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.291-0500 c20012| 2016-04-06T02:53:04.744-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:35.293-0500 c20012| 2016-04-06T02:53:04.745-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.300-0500 c20012| 2016-04-06T02:53:04.745-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1201 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.301-0500 c20012| 2016-04-06T02:53:04.746-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.302-0500 c20012| 2016-04-06T02:53:04.746-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1201 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:35.306-0500 c20012| 2016-04-06T02:53:04.746-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1200 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.314-0500 c20012| 2016-04-06T02:53:04.747-0500 D ASIO 
[NetworkInterfaceASIO-Replication-0] Request 1200 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.315-0500 c20012| 2016-04-06T02:53:04.747-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:35.318-0500 c20012| 2016-04-06T02:53:04.747-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:06.747Z [js_test:multi_coll_drop] 2016-04-06T02:53:35.326-0500 c20012| 2016-04-06T02:53:05.166-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.327-0500 c20012| 2016-04-06T02:53:05.166-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.338-0500 c20012| 2016-04-06T02:53:05.170-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.344-0500 c20012| 2016-04-06T02:53:05.174-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.351-0500 c20012| 2016-04-06T02:53:05.223-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.366-0500 c20012| 2016-04-06T02:53:05.223-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.384-0500 c20012| 2016-04-06T02:53:05.663-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1112 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929185000|1, t: 4, h: -8800919752589540802, v: 2, op: "n", ns: "", o: { msg: "new primary" } } ], id: 22887452903, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.387-0500 c20012| 2016-04-06T02:53:05.665-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929185000|1 and ending at ts: Timestamp 1459929185000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:35.390-0500 c20012| 2016-04-06T02:53:05.665-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:35.394-0500 c20012| 2016-04-06T02:53:05.665-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.402-0500 c20012| 2016-04-06T02:53:05.665-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.422-0500 c20012| 2016-04-06T02:53:05.665-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.422-0500 c20012| 2016-04-06T02:53:05.665-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.424-0500 c20012| 2016-04-06T02:53:05.665-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.427-0500 c20012| 2016-04-06T02:53:05.665-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.429-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.430-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.432-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.435-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.436-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.437-0500 c20012| 2016-04-06T02:53:05.665-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.438-0500 c20012| 2016-04-06T02:53:05.666-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:35.438-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.439-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.439-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.442-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.443-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.446-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.447-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool 
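[annotation] The batch applied above is the term-4 "new primary" no-op that mongovm16:20013 wrote on election; the repl writer worker pool spins up per batch and winds down immediately after. A minimal shell sketch of inspecting that marker directly, assuming a live connection to the c20012 member (host name taken from the log):

    // Connect to the config member and read the newest "new primary" no-op
    // from its oplog (op: "n" entries carry no document data, only a message).
    var conn = new Mongo("mongovm16:20012");
    var oplog = conn.getDB("local").getCollection("oplog.rs");
    var marker = oplog.find({ op: "n", "o.msg": "new primary" })
                      .sort({ $natural: -1 })   // newest first, as conn41 scans above
                      .limit(1)
                      .next();
    printjson(marker);   // expect ts/term matching the fetched batch, e.g. t: 4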
[js_test:multi_coll_drop] 2016-04-06T02:53:35.448-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.449-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.450-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.452-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.453-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.455-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.457-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.457-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.459-0500 c20012| 2016-04-06T02:53:05.666-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.460-0500 c20012| 2016-04-06T02:53:05.675-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.462-0500 c20012| 2016-04-06T02:53:05.675-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.465-0500 c20012| 2016-04-06T02:53:05.676-0500 D REPL [rsBackgroundSync-0] Cancelling oplog query because we have to choose a sync source. 
Current source: mongovm16:20013, OpTime{ ts: Timestamp 1459929163000|8, t: 3 }, hasSyncSource:0 [js_test:multi_coll_drop] 2016-04-06T02:53:35.466-0500 c20012| 2016-04-06T02:53:05.676-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1204 -- target:mongovm16:20013 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 22887452903 ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.468-0500 c20012| 2016-04-06T02:53:05.676-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1204 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.471-0500 c20012| 2016-04-06T02:53:05.676-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.475-0500 c20012| 2016-04-06T02:53:05.676-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1204 finished with response: { cursorsKilled: [ 22887452903 ], cursorsNotFound: [], cursorsAlive: [], cursorsUnknown: [], ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.476-0500 c20012| 2016-04-06T02:53:05.676-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:35.478-0500 c20012| 2016-04-06T02:53:05.676-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:05.676Z [js_test:multi_coll_drop] 2016-04-06T02:53:35.480-0500 c20012| 2016-04-06T02:53:05.676-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:05.676Z [js_test:multi_coll_drop] 2016-04-06T02:53:35.484-0500 c20012| 2016-04-06T02:53:05.676-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1206 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:15.676-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.490-0500 c20012| 2016-04-06T02:53:05.677-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1207 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:15.677-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.493-0500 c20012| 2016-04-06T02:53:05.677-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1206 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.496-0500 c20012| 2016-04-06T02:53:05.677-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1207 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.498-0500 c20012| 2016-04-06T02:53:05.679-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1206 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.499-0500 c20012| 2016-04-06T02:53:05.679-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.501-0500 c20012| 2016-04-06T02:53:05.679-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:08.179Z [js_test:multi_coll_drop] 2016-04-06T02:53:35.504-0500 c20012| 2016-04-06T02:53:05.679-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.506-0500 c20012| 
2016-04-06T02:53:05.682-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.508-0500 c20012| 2016-04-06T02:53:05.682-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.508-0500 c20012| 2016-04-06T02:53:05.682-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.511-0500 c20012| 2016-04-06T02:53:05.682-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1207 finished with response: { ok: 1.0, electionTime: new Date(6270348099755966465), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, opTime: { ts: Timestamp 1459929185000|1, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.513-0500 c20012| 2016-04-06T02:53:05.682-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.515-0500 c20012| 2016-04-06T02:53:05.682-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:35.518-0500 c20012| 2016-04-06T02:53:05.682-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:08.182Z [js_test:multi_coll_drop] 2016-04-06T02:53:35.520-0500 c20012| 2016-04-06T02:53:05.682-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:35.523-0500 c20012| 2016-04-06T02:53:05.683-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:35.526-0500 c20012| 2016-04-06T02:53:05.683-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20013: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:35.528-0500 c20012| 2016-04-06T02:53:05.683-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:35.529-0500 c20012| 2016-04-06T02:53:05.723-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.532-0500 c20012| 2016-04-06T02:53:05.724-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.534-0500 c20012| 2016-04-06T02:53:05.740-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.535-0500 c20012| 2016-04-06T02:53:05.740-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:35.537-0500 c20012| 2016-04-06T02:53:05.741-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.537-0500 c20012| 2016-04-06T02:53:05.920-0500 D COMMAND 
[conn31] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.542-0500 c20012| 2016-04-06T02:53:05.920-0500 D QUERY [conn31] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:35.549-0500 c20012| 2016-04-06T02:53:05.920-0500 I COMMAND [conn31] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.551-0500 c20012| 2016-04-06T02:53:05.921-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39686 #39 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:35.552-0500 c20012| 2016-04-06T02:53:05.921-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39688 #40 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:35.553-0500 c20012| 2016-04-06T02:53:05.921-0500 D COMMAND [conn40] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:35.556-0500 c20012| 2016-04-06T02:53:05.922-0500 I COMMAND [conn40] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.556-0500 c20012| 2016-04-06T02:53:05.922-0500 D COMMAND [conn39] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:35.557-0500 c20012| 2016-04-06T02:53:05.922-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.565-0500 c20012| 2016-04-06T02:53:05.922-0500 D COMMAND [conn40] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929171000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.571-0500 c20012| 2016-04-06T02:53:05.922-0500 D COMMAND [conn39] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.577-0500 c20012| 2016-04-06T02:53:05.922-0500 D COMMAND [conn39] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:35.582-0500 c20012| 2016-04-06T02:53:05.922-0500 D REPL [conn39] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929171000|2, t: 3 } and is durable through: { ts: Timestamp 1459929171000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.583-0500 c20012| 2016-04-06T02:53:05.922-0500 D REPL [conn39] received notification that node with memberID 2 in config with version 1 has reached 
optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.592-0500 c20012| 2016-04-06T02:53:05.922-0500 I COMMAND [conn39] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.605-0500 c20012| 2016-04-06T02:53:05.922-0500 I COMMAND [conn40] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929171000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } planSummary: COLLSCAN cursorid:23130095408 keysExamined:0 docsExamined:2 numYields:0 nreturned:1 reslen:468 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.606-0500 c20012| 2016-04-06T02:53:05.923-0500 D COMMAND [conn40] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 23130095408 ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.608-0500 c20012| 2016-04-06T02:53:05.923-0500 I COMMAND [conn40] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 23130095408 ] } numYields:0 reslen:175 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.613-0500 c20012| 2016-04-06T02:53:05.924-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39689 #41 (11 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:35.621-0500 c20012| 2016-04-06T02:53:05.924-0500 D COMMAND [conn41] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:35.624-0500 c20012| 2016-04-06T02:53:05.925-0500 I COMMAND [conn41] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.640-0500 c20012| 2016-04-06T02:53:05.925-0500 D COMMAND [conn41] run command admin.$cmd { replSetGetRBID: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.640-0500 c20012| 2016-04-06T02:53:05.925-0500 D COMMAND [conn41] command: replSetGetRBID [js_test:multi_coll_drop] 2016-04-06T02:53:35.652-0500 c20012| 2016-04-06T02:53:05.925-0500 I COMMAND [conn41] command admin.$cmd command: replSetGetRBID { replSetGetRBID: 1 } numYields:0 reslen:92 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.665-0500 c20012| 2016-04-06T02:53:05.925-0500 D QUERY [conn41] Running query: query: {} sort: { $natural: -1 } projection: { ts: 1, h: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.678-0500 c20012| 2016-04-06T02:53:05.925-0500 D QUERY [conn41] Only one plan is available; it will be run but will not be cached. 
query: {} sort: { $natural: -1 } projection: { ts: 1, h: 1 }, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:35.685-0500 c20012| 2016-04-06T02:53:05.925-0500 I COMMAND [conn41] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN cursorid:22266800349 ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:101 numYields:0 nreturned:101 reslen:2848 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.693-0500 c20012| 2016-04-06T02:53:05.925-0500 D COMMAND [conn41] killcursors: found 1 of 1 [js_test:multi_coll_drop] 2016-04-06T02:53:35.694-0500 c20012| 2016-04-06T02:53:05.925-0500 I COMMAND [conn41] killcursors local.oplog.rs numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.698-0500 c20012| 2016-04-06T02:53:05.925-0500 D QUERY [conn41] Running query: query: { _id: "mongovm16:20014" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:35.702-0500 c20012| 2016-04-06T02:53:05.925-0500 D QUERY [conn41] Using idhack: query: { _id: "mongovm16:20014" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:35.709-0500 c20012| 2016-04-06T02:53:05.925-0500 I COMMAND [conn41] query config.mongos query: { _id: "mongovm16:20014" } planSummary: IDHACK ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:122 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.717-0500 c20012| 2016-04-06T02:53:05.925-0500 D QUERY [conn41] Running query: query: { _id: "mongovm16:20015" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:35.732-0500 c20012| 2016-04-06T02:53:05.925-0500 D QUERY [conn41] Using idhack: query: { _id: "mongovm16:20015" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:35.736-0500 c20012| 2016-04-06T02:53:05.925-0500 I COMMAND [conn41] query config.mongos query: { _id: "mongovm16:20015" } planSummary: IDHACK ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:122 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.737-0500 c20012| 2016-04-06T02:53:05.925-0500 D QUERY [conn41] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:53:35.743-0500 c20012| 2016-04-06T02:53:05.925-0500 D QUERY [conn41] Only one plan is available; it will be run but will not be cached. 
query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:35.744-0500 c20012| 2016-04-06T02:53:05.925-0500 I COMMAND [conn41] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:114 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.746-0500 c20012| 2016-04-06T02:53:05.925-0500 D COMMAND [conn41] run command admin.$cmd { replSetGetRBID: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.749-0500 c20012| 2016-04-06T02:53:05.925-0500 D COMMAND [conn41] command: replSetGetRBID [js_test:multi_coll_drop] 2016-04-06T02:53:35.751-0500 c20012| 2016-04-06T02:53:05.925-0500 I COMMAND [conn41] command admin.$cmd command: replSetGetRBID { replSetGetRBID: 1 } numYields:0 reslen:92 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.756-0500 c20012| 2016-04-06T02:53:05.926-0500 D COMMAND [conn39] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.758-0500 c20012| 2016-04-06T02:53:05.926-0500 D COMMAND [conn39] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:35.764-0500 c20012| 2016-04-06T02:53:05.926-0500 D REPL [conn39] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.770-0500 c20012| 2016-04-06T02:53:05.926-0500 D REPL [conn39] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.781-0500 c20012| 2016-04-06T02:53:05.926-0500 I COMMAND [conn39] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.784-0500 c20012| 2016-04-06T02:53:05.938-0500 D NETWORK [conn41] SocketException: remote: 192.168.100.28:39689 error: 9001 socket exception [CLOSED] server [192.168.100.28:39689] [js_test:multi_coll_drop] 2016-04-06T02:53:35.787-0500 c20012| 2016-04-06T02:53:05.938-0500 I NETWORK [conn41] end connection 192.168.100.28:39689 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:35.789-0500 c20012| 2016-04-06T02:53:05.939-0500 D 
COMMAND [conn31] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.792-0500 c20012| 2016-04-06T02:53:05.939-0500 D QUERY [conn31] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:35.795-0500 c20012| 2016-04-06T02:53:05.939-0500 I COMMAND [conn31] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.798-0500 c20012| 2016-04-06T02:53:05.940-0500 D COMMAND [conn40] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929163000|8 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.800-0500 c20012| 2016-04-06T02:53:05.940-0500 I COMMAND [conn40] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929163000|8 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } planSummary: COLLSCAN cursorid:25053585400 keysExamined:0 docsExamined:2 numYields:0 nreturned:2 reslen:718 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.803-0500 c20012| 2016-04-06T02:53:05.940-0500 D COMMAND [conn39] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.804-0500 c20012| 2016-04-06T02:53:05.940-0500 D COMMAND [conn39] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:35.805-0500 c20012| 2016-04-06T02:53:05.940-0500 D REPL [conn39] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.807-0500 c20012| 2016-04-06T02:53:05.940-0500 D REPL [conn39] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.812-0500 c20012| 2016-04-06T02:53:05.940-0500 I COMMAND [conn39] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 
1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.813-0500 c20012| 2016-04-06T02:53:05.946-0500 D COMMAND [conn40] run command local.$cmd { getMore: 25053585400, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.814-0500 c20012| 2016-04-06T02:53:06.671-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.817-0500 c20012| 2016-04-06T02:53:06.671-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:35.820-0500 c20012| 2016-04-06T02:53:06.672-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:478 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.821-0500 c20012| 2016-04-06T02:53:06.840-0500 D COMMAND [conn35] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.841-0500 c20012| 2016-04-06T02:53:06.840-0500 I COMMAND [conn35] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.844-0500 c20012| 2016-04-06T02:53:08.183-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1210 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:18.183-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.846-0500 c20012| 2016-04-06T02:53:08.183-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.848-0500 c20012| 2016-04-06T02:53:08.183-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:35.849-0500 c20012| 2016-04-06T02:53:08.183-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.852-0500 c20012| 2016-04-06T02:53:08.183-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1212 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:18.183-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.862-0500 c20012| 2016-04-06T02:53:08.183-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1212 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.863-0500 c20012| 2016-04-06T02:53:08.184-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1211 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.873-0500 c20012| 2016-04-06T02:53:08.184-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1212 finished with response: { ok: 1.0, electionTime: new Date(6270348099755966465), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, opTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.875-0500 c20012| 2016-04-06T02:53:08.184-0500 I ASIO 
[NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.879-0500 c20012| 2016-04-06T02:53:08.184-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1211 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:35.879-0500 c20012| 2016-04-06T02:53:08.184-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1210 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:35.883-0500 c20012| 2016-04-06T02:53:08.185-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.886-0500 c20012| 2016-04-06T02:53:08.185-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:10.685Z [js_test:multi_coll_drop] 2016-04-06T02:53:35.895-0500 c20012| 2016-04-06T02:53:08.185-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 25053585400, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } cursorid:25053585400 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2238ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.900-0500 c20012| 2016-04-06T02:53:08.186-0500 D COMMAND [conn40] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 25053585400 ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.902-0500 c20012| 2016-04-06T02:53:08.187-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1210 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, opTime: { ts: Timestamp 1459929185000|1, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.908-0500 c20012| 2016-04-06T02:53:08.187-0500 I COMMAND [conn40] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 25053585400 ] } numYields:0 reslen:175 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.910-0500 c20012| 2016-04-06T02:53:08.190-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:10.690Z [js_test:multi_coll_drop] 2016-04-06T02:53:35.911-0500 c20012| 2016-04-06T02:53:08.246-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.911-0500 c20012| 2016-04-06T02:53:08.246-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:35.913-0500 c20012| 2016-04-06T02:53:08.248-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } numYields:0 reslen:458 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.917-0500 c20012| 2016-04-06T02:53:08.445-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:39846 #42 (11 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:35.918-0500 c20012| 2016-04-06T02:53:08.445-0500 D COMMAND [conn42] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } 
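[annotation] The conn40 traffic above is the standard cursor lifecycle a syncing member drives against its sync source: a tailable, awaitData find on the oplog at a resume timestamp, bounded getMore calls, then killCursors when the source changes. A sketch of the same sequence from the shell, assuming the host from the log; the term and lastKnownCommittedOpTime fields seen above are internal replication-protocol parameters and are omitted here:

    // Open a tailable, awaitData cursor on the oplog at a resume point,
    // pull one more batch with a bounded wait, then kill the cursor.
    var local = new Mongo("mongovm16:20012").getDB("local");
    var res = local.runCommand({
        find: "oplog.rs",
        filter: { ts: { $gte: Timestamp(1459929163, 8) } },  // resume ts from the log
        tailable: true,
        oplogReplay: true,
        awaitData: true,
        maxTimeMS: 60000
    });
    var more = local.runCommand({
        getMore: res.cursor.id,
        collection: "oplog.rs",
        maxTimeMS: 2500          // matches the bounded getMore wait above
    });
    local.runCommand({ killCursors: "oplog.rs", cursors: [ res.cursor.id ] });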
[js_test:multi_coll_drop] 2016-04-06T02:53:35.923-0500 c20012| 2016-04-06T02:53:08.445-0500 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.925-0500 c20012| 2016-04-06T02:53:08.446-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.926-0500 c20012| 2016-04-06T02:53:08.446-0500 D REPL [conn42] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929188000|8, t: 4 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.926-0500 c20012| 2016-04-06T02:53:08.446-0500 D REPL [conn42] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999984μs [js_test:multi_coll_drop] 2016-04-06T02:53:35.928-0500 c20012| 2016-04-06T02:53:08.673-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.929-0500 c20012| 2016-04-06T02:53:08.673-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:35.932-0500 c20012| 2016-04-06T02:53:08.675-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:478 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:35.932-0500 c20012| 2016-04-06T02:53:08.677-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.934-0500 c20012| 2016-04-06T02:53:08.678-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1215 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:38.677-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.935-0500 c20012| 2016-04-06T02:53:08.681-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1215 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.938-0500 c20012| 2016-04-06T02:53:08.681-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1215 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.942-0500 c20012| 2016-04-06T02:53:08.682-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20013 starting at filter: { ts: { $gte: Timestamp 1459929185000|1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:35.944-0500 c20012| 2016-04-06T02:53:08.682-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1217 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.682-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929185000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.945-0500 c20012| 2016-04-06T02:53:08.682-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20013 [js_test:multi_coll_drop] 
2016-04-06T02:53:35.946-0500 c20012| 2016-04-06T02:53:08.682-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1217 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.952-0500 c20012| 2016-04-06T02:53:08.682-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.960-0500 c20012| 2016-04-06T02:53:08.682-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1218 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:35.961-0500 c20012| 2016-04-06T02:53:08.682-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1218 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:35.970-0500 c20012| 2016-04-06T02:53:08.682-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1217 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929185000|1, t: 4, h: -8800919752589540802, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1459929185000|2, t: 4, h: -3715515470456908696, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929171765), up: 44, waiting: false } } }, { ts: Timestamp 1459929185000|3, t: 4, h: -2117331217373926554, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929171773), up: 44, waiting: false } } }, { ts: Timestamp 1459929185000|4, t: 4, h: 7420545252714322932, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: -63.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-63.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929188000|1, t: 4, h: 9006822706624246442, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:53:08.213-0500-5704c06465c17830b843f1c8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188213), what: "split", ns: "multidrop.coll", details: { before: { min: { 
_id: -64.0 }, max: { _id: MaxKey } }, left: { min: { _id: -64.0 }, max: { _id: -63.0 }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -63.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929188000|2, t: 4, h: -5651042818538587262, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929188220), up: 61, waiting: true } } }, { ts: Timestamp 1459929188000|3, t: 4, h: -5240106199834540916, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929188221), up: 61, waiting: true } } }, { ts: Timestamp 1459929188000|4, t: 4, h: -8682658828438772587, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } }, { ts: Timestamp 1459929188000|5, t: 4, h: -3166850081498560888, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c06465c17830b843f1c9'), state: 2, when: new Date(1459929188315), why: "splitting chunk [{ _id: -63.0 }, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929188000|6, t: 4, h: -6079188038794452835, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: -62.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-63.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-62.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929188000|7, t: 4, h: -5413670652354134036, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:53:08.379-0500-5704c06465c17830b843f1ca", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188379), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -63.0 }, max: { _id: MaxKey } }, left: { min: { _id: -63.0 }, max: { _id: -62.0 }, lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -62.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929188000|8, t: 4, h: -4362257609073136726, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 21969886375, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.971-0500 c20012| 2016-04-06T02:53:08.682-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1218 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.972-0500 c20012| 2016-04-06T02:53:08.683-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|8, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:35.974-0500 c20012| 2016-04-06T02:53:08.683-0500 D REPL [rsBackgroundSync-0] fetcher read 12 operations from remote oplog starting at ts: Timestamp 1459929185000|1 and ending at ts: Timestamp 1459929188000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:35.980-0500 c20012| 2016-04-06T02:53:08.683-0500 D QUERY [rsSync] 
Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:35.981-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.983-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.984-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.985-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.988-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.991-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.992-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.995-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.996-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.998-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:35.999-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.024-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.029-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.036-0500 c20012| 2016-04-06T02:53:08.684-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.047-0500 c20012| 2016-04-06T02:53:08.685-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.051-0500 c20012| 2016-04-06T02:53:08.685-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.058-0500 c20012| 2016-04-06T02:53:08.684-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:36.060-0500 c20012| 2016-04-06T02:53:08.686-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:36.065-0500 c20012| 2016-04-06T02:53:08.686-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1221 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.686-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: 
Timestamp 1459929188000|8, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.066-0500 c20012| 2016-04-06T02:53:08.686-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1221 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:36.066-0500 c20012| 2016-04-06T02:53:08.686-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.068-0500 c20012| 2016-04-06T02:53:08.686-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.071-0500 c20012| 2016-04-06T02:53:08.686-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.071-0500 c20012| 2016-04-06T02:53:08.686-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.072-0500 c20012| 2016-04-06T02:53:08.686-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.075-0500 c20012| 2016-04-06T02:53:08.686-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.076-0500 c20012| 2016-04-06T02:53:08.687-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.077-0500 c20012| 2016-04-06T02:53:08.687-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.080-0500 c20012| 2016-04-06T02:53:08.687-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.085-0500 c20012| 2016-04-06T02:53:08.687-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.088-0500 c20012| 2016-04-06T02:53:08.687-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.090-0500 c20012| 2016-04-06T02:53:08.687-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.091-0500 c20012| 2016-04-06T02:53:08.687-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.091-0500 c20012| 2016-04-06T02:53:08.688-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.092-0500 c20012| 2016-04-06T02:53:08.688-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.092-0500 c20012| 2016-04-06T02:53:08.688-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.094-0500 c20012| 2016-04-06T02:53:08.691-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:36.096-0500 c20012| 2016-04-06T02:53:08.691-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:36.096-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.097-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.098-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.098-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.101-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.102-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.104-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.106-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.107-0500 c20012| 2016-04-06T02:53:08.691-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.112-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.113-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.115-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.119-0500 c20012| 2016-04-06T02:53:08.692-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:36.120-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.122-0500 c20012| 2016-04-06T02:53:08.692-0500 D QUERY [repl writer worker 7] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:36.130-0500 c20012| 2016-04-06T02:53:08.692-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.138-0500 c20012| 2016-04-06T02:53:08.692-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1222 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.141-0500 c20012| 2016-04-06T02:53:08.692-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1222 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:36.145-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.150-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.154-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.155-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.158-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.159-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.161-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.164-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.166-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.168-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.171-0500 c20012| 2016-04-06T02:53:08.692-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1222 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.175-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.179-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.180-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.185-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.189-0500 c20012| 2016-04-06T02:53:08.692-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:36.189-0500 c20012| 2016-04-06T02:53:08.693-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.194-0500 c20012| 2016-04-06T02:53:08.693-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.195-0500 c20012| 2016-04-06T02:53:08.693-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.197-0500 c20012| 2016-04-06T02:53:08.693-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.202-0500 c20012| 2016-04-06T02:53:08.696-0500 D REPL [conn42] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29750260μs [js_test:multi_coll_drop] 2016-04-06T02:53:36.206-0500 c20012| 2016-04-06T02:53:08.697-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:36.212-0500 c20012| 2016-04-06T02:53:08.700-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|3, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.220-0500 c20012| 2016-04-06T02:53:08.700-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1224 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|3, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.222-0500 c20012| 2016-04-06T02:53:08.700-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1224 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:36.222-0500 c20012| 2016-04-06T02:53:08.700-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1224 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.224-0500 c20012| 2016-04-06T02:53:08.701-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:36.227-0500 c20012| 2016-04-06T02:53:08.700-0500 D REPL [conn42] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29745992μs [js_test:multi_coll_drop] 2016-04-06T02:53:36.229-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.231-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.231-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.232-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.234-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.237-0500 c20013| 2016-04-06T02:52:22.559-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.243-0500 c20011| 2016-04-06T02:52:43.260-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|4, t: 3 } } cursorid:19853084149 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 23ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.244-0500 s20015| 2016-04-06T02:53:18.969-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:36.247-0500 s20015| 2016-04-06T02:53:18.969-0500 D NETWORK [Balancer] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:36.247-0500 s20015| 2016-04-06T02:53:18.969-0500 D NETWORK [Balancer] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:53:36.250-0500 d20010| 2016-04-06T02:53:21.977-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -56.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:36.263-0500 c20013| 2016-04-06T02:52:22.559-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1051 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.559-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.264-0500 c20013| 2016-04-06T02:52:22.559-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:36.265-0500 c20013| 2016-04-06T02:52:22.560-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1051 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:36.269-0500 c20013| 2016-04-06T02:52:22.560-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:36.270-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.271-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.271-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.274-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.278-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.279-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.281-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:36.285-0500 s20015| 2016-04-06T02:53:18.970-0500 D ASIO [Balancer] startCommand: RemoteCommand 103 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:48.970-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198271), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.288-0500 s20015| 2016-04-06T02:53:18.970-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 103 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:36.293-0500 s20015| 2016-04-06T02:53:18.986-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 103 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929198000|1, t: 5 }, electionId: ObjectId('7fffffff0000000000000005') } [js_test:multi_coll_drop] 2016-04-06T02:53:36.300-0500 s20015| 2016-04-06T02:53:18.986-0500 D ASIO [Balancer] startCommand: RemoteCommand 105 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:48.986-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.301-0500 s20015| 2016-04-06T02:53:18.986-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 105 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:36.304-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 105 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 
} [js_test:multi_coll_drop] 2016-04-06T02:53:36.306-0500 s20015| 2016-04-06T02:53:18.987-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929198000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.318-0500 c20011| 2016-04-06T02:52:43.260-0500 D COMMAND [conn40] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:43.260-0500-5704c04b65c17830b843f1c6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163260), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -65.0 }, max: { _id: MaxKey } }, left: { min: { _id: -65.0 }, max: { _id: -64.0 }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -64.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.320-0500 c20011| 2016-04-06T02:52:43.260-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|5, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.328-0500 c20011| 2016-04-06T02:52:43.261-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|5, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.335-0500 c20011| 2016-04-06T02:52:43.267-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|5, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.342-0500 c20011| 2016-04-06T02:52:43.274-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.344-0500 c20011| 2016-04-06T02:52:43.274-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:36.350-0500 c20011| 2016-04-06T02:52:43.274-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.355-0500 c20011| 2016-04-06T02:52:43.274-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|6, t: 3 } and is durable through: { ts: Timestamp 1459929163000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.359-0500 c20011| 2016-04-06T02:52:43.274-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.363-0500 c20011| 2016-04-06T02:52:43.288-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929163000|6, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|5, t: 3 }, name-id: "248" } [js_test:multi_coll_drop] 2016-04-06T02:53:36.370-0500 c20011| 2016-04-06T02:52:43.291-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.374-0500 c20011| 2016-04-06T02:52:43.292-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:36.375-0500 c20011| 2016-04-06T02:52:43.292-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.377-0500 c20011| 2016-04-06T02:52:43.292-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|6, t: 3 } and is durable through: { ts: Timestamp 1459929163000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.378-0500 c20011| 2016-04-06T02:52:43.292-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.381-0500 c20011| 2016-04-06T02:52:43.292-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.402-0500 c20011| 2016-04-06T02:52:43.292-0500 I COMMAND [conn40] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:52:43.260-0500-5704c04b65c17830b843f1c6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163260), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -65.0 }, max: { _id: MaxKey } }, left: { min: { _id: -65.0 }, max: { _id: -64.0 }, lastmod: Timestamp 1000|73, 
lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -64.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 31ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.406-0500 c20011| 2016-04-06T02:52:43.292-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|5, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 25ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.409-0500 c20011| 2016-04-06T02:52:43.292-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04b65c17830b843f1c5') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.412-0500 c20011| 2016-04-06T02:52:43.293-0500 D QUERY [conn40] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:36.414-0500 c20011| 2016-04-06T02:52:43.293-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04b65c17830b843f1c5') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.417-0500 c20011| 2016-04-06T02:52:43.298-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|6, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.420-0500 c20011| 2016-04-06T02:52:43.298-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|6, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.422-0500 c20011| 2016-04-06T02:52:43.301-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.424-0500 c20011| 2016-04-06T02:52:43.301-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:36.425-0500 c20011| 2016-04-06T02:52:43.301-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: 
Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.442-0500 c20011| 2016-04-06T02:52:43.301-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|7, t: 3 } and is durable through: { ts: Timestamp 1459929163000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.458-0500 c20011| 2016-04-06T02:52:43.301-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.461-0500 c20011| 2016-04-06T02:52:43.302-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|6, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.463-0500 c20011| 2016-04-06T02:52:43.316-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929163000|7, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|6, t: 3 }, name-id: "249" } [js_test:multi_coll_drop] 2016-04-06T02:53:36.465-0500 c20011| 2016-04-06T02:52:43.323-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.466-0500 c20011| 2016-04-06T02:52:43.323-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:36.468-0500 c20011| 2016-04-06T02:52:43.323-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.472-0500 c20011| 2016-04-06T02:52:43.323-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|7, t: 3 } and is durable through: { ts: Timestamp 1459929163000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.475-0500 c20011| 2016-04-06T02:52:43.323-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.481-0500 c20011| 2016-04-06T02:52:43.323-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.485-0500 c20011| 2016-04-06T02:52:43.324-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|6, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 22ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.499-0500 c20011| 2016-04-06T02:52:43.324-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { ts: ObjectId('5704c04b65c17830b843f1c5') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { state: 0 } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 31ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.509-0500 c20011| 2016-04-06T02:52:43.331-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.513-0500 c20011| 2016-04-06T02:52:43.332-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|72 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.517-0500 c20011| 2016-04-06T02:52:43.332-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.520-0500 c20011| 2016-04-06T02:52:43.332-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|72 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.522-0500 c20011| 2016-04-06T02:52:43.333-0500 D QUERY [conn36] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:36.535-0500 c20011| 2016-04-06T02:52:43.333-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|72 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:732 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.539-0500 c20011| 2016-04-06T02:52:43.333-0500 D COMMAND [conn36] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.541-0500 c20011| 2016-04-06T02:52:43.333-0500 D COMMAND [conn36] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.544-0500 c20011| 2016-04-06T02:52:43.333-0500 D COMMAND [conn36] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.545-0500 c20011| 2016-04-06T02:52:43.333-0500 D QUERY [conn36] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:36.550-0500 c20011| 2016-04-06T02:52:43.334-0500 I COMMAND [conn36] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.556-0500 c20011| 2016-04-06T02:52:43.335-0500 D COMMAND [conn40] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04b65c17830b843f1c7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929163335), why: "splitting chunk [{ _id: -64.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.559-0500 c20011| 2016-04-06T02:52:43.335-0500 D QUERY [conn40] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:36.561-0500 c20011| 2016-04-06T02:52:43.335-0500 D QUERY [conn40] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:36.563-0500 c20011| 2016-04-06T02:52:43.335-0500 D QUERY [conn40] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.565-0500 c20011| 2016-04-06T02:52:43.337-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } cursorid:19853084149 numYields:0 nreturned:1 reslen:602 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.569-0500 c20011| 2016-04-06T02:52:43.340-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.581-0500 c20011| 2016-04-06T02:52:43.346-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.582-0500 c20011| 2016-04-06T02:52:43.346-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:36.607-0500 c20011| 2016-04-06T02:52:43.346-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.612-0500 c20011| 2016-04-06T02:52:43.346-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.621-0500 c20011| 2016-04-06T02:52:43.347-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.627-0500 c20011| 2016-04-06T02:52:43.355-0500 D REPL [conn40] Required snapshot optime: { ts: Timestamp 1459929163000|8, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|7, t: 3 }, name-id: "250" } [js_test:multi_coll_drop] 2016-04-06T02:53:36.636-0500 c20011| 2016-04-06T02:52:43.366-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:36.636-0500 c20011| 2016-04-06T02:52:43.366-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:36.638-0500 c20011| 2016-04-06T02:52:43.366-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.639-0500 c20011| 2016-04-06T02:52:43.366-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.642-0500 c20011| 2016-04-06T02:52:43.366-0500 D REPL [conn35] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.647-0500 c20011| 2016-04-06T02:52:43.367-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.655-0500 c20011| 2016-04-06T02:52:43.367-0500 I COMMAND [conn40] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c04b65c17830b843f1c7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929163335), why: "splitting chunk [{ _id: -64.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c04b65c17830b843f1c7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929163335), why: "splitting chunk [{ _id: -64.0 }, { _id: MaxKey }) in multidrop.coll" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:2 numYields:0 reslen:611 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 31ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.660-0500 c20011| 2016-04-06T02:52:43.367-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } cursorid:19853084149 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 26ms 
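The findAndModify pair on config.locks in the records above is the distributed-lock handoff that serializes each splitChunk: the lock document is taken by atomically flipping it from state 0 (free) to state 2 (held), and released by setting state back to 0, both with w: "majority" so the lock state survives a config-server failover. A minimal mongo-shell sketch replaying the same two commands follows; the connection target, ObjectId, and who/process/why strings are placeholders copied from this run, not values to reuse.

// Sketch only: replays the lock acquire/release visible in the records above.
var conf = connect("mongovm16:20011/config"); // config-server primary in this run

// Acquire: succeeds only while the lock document is free (state: 0).
var acquired = conf.runCommand({
    findAndModify: "locks",
    query: { _id: "multidrop.coll", state: 0 },
    update: { $set: {
        ts: ObjectId(),                                   // unique handle for this acquisition
        state: 2,                                         // 2 = held
        who: "mongovm16:20010:1459929128:185613966:conn5",
        process: "mongovm16:20010:1459929128:185613966",
        when: new Date(),
        why: "splitting chunk [{ _id: -64.0 }, { _id: MaxKey }) in multidrop.coll"
    } },
    upsert: true,
    new: true,
    writeConcern: { w: "majority", wtimeout: 15000 },
    maxTimeMS: 30000
});

// Release: match on the handle so only the current holder can unlock.
conf.runCommand({
    findAndModify: "locks",
    query: { ts: acquired.value.ts },
    update: { $set: { state: 0 } },                       // 0 = free
    writeConcern: { w: "majority", wtimeout: 15000 },
    maxTimeMS: 30000
});

The w: "majority" write concern is also why each findAndModify reply stalls until the surrounding replSetUpdatePosition traffic catches up: the "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines show the primary holding the response until the secondaries report the lock write durable, and the subsequent config.chunks finds use readConcern { level: "majority", afterOpTime: ... } so the shard only acts on chunk metadata that is already majority-committed.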
[js_test:multi_coll_drop] 2016-04-06T02:53:36.663-0500 c20011| 2016-04-06T02:52:43.722-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 306 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:53.722-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.663-0500 c20011| 2016-04-06T02:52:43.722-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:36.664-0500 c20011| 2016-04-06T02:52:43.722-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 307 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:36.664-0500 c20011| 2016-04-06T02:52:43.723-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:36.665-0500 c20011| 2016-04-06T02:52:43.723-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 307 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:36.665-0500 c20011| 2016-04-06T02:52:43.723-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 306 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:36.668-0500 c20011| 2016-04-06T02:52:43.723-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 306 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, opTime: { ts: Timestamp 1459929161000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.671-0500 c20011| 2016-04-06T02:52:43.724-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot optime: { ts: Timestamp 1459929152000|2, t: 3 }, currentCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.672-0500 c20011| 2016-04-06T02:52:43.724-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:45.724Z [js_test:multi_coll_drop] 2016-04-06T02:53:36.678-0500 c20011| 2016-04-06T02:52:44.213-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 309 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.679-0500 c20011| 2016-04-06T02:52:44.214-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 309 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:36.680-0500 c20011| 2016-04-06T02:52:44.227-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:36.680-0500 c20011| 2016-04-06T02:52:44.227-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:36.684-0500 c20011| 2016-04-06T02:52:44.228-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:36.687-0500 c20011| 2016-04-06T02:52:44.591-0500 D COMMAND [conn29] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:36.688-0500 c20011| 
[js_test:multi_coll_drop] 2016-04-06T02:53:36.688-0500 c20011| 2016-04-06T02:52:44.591-0500 D QUERY [conn29] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:36.692-0500 c20011| 2016-04-06T02:52:44.591-0500 I COMMAND [conn29] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:274 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.693-0500 c20011| 2016-04-06T02:52:44.591-0500 D COMMAND [conn31] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929161000|3 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.707-0500 c20011| 2016-04-06T02:52:44.592-0500 D COMMAND [conn34] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.708-0500 c20011| 2016-04-06T02:52:44.592-0500 D COMMAND [conn34] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:36.714-0500 c20011| 2016-04-06T02:52:44.592-0500 D REPL [conn34] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|3, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.720-0500 c20011| 2016-04-06T02:52:44.592-0500 D REPL [conn34] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929152000|2, t: 3 } and is durable through: { ts: Timestamp 1459929152000|2, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.728-0500 c20011| 2016-04-06T02:52:44.592-0500 I COMMAND [conn34] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.736-0500 c20011| 2016-04-06T02:52:44.592-0500 I COMMAND [conn31] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929161000|3 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 3 } planSummary: COLLSCAN cursorid:20716408231 keysExamined:0 docsExamined:49 numYields:0 nreturned:49 reslen:19067 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.737-0500 c20011| 2016-04-06T02:52:44.593-0500 D COMMAND [conn31] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 20716408231 ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.746-0500 c20011| 2016-04-06T02:52:44.593-0500 I COMMAND [conn31] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 20716408231 ] } numYields:0 reslen:175 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.746-0500 c20011| 2016-04-06T02:52:44.594-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:32849 #44 (17 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:36.747-0500 c20011| 2016-04-06T02:52:44.594-0500 D COMMAND [conn44] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.749-0500 c20011| 2016-04-06T02:52:44.594-0500 I COMMAND [conn44] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.750-0500 c20011| 2016-04-06T02:52:44.594-0500 D COMMAND [conn44] run command admin.$cmd { replSetGetRBID: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.754-0500 c20011| 2016-04-06T02:52:44.594-0500 D COMMAND [conn44] command: replSetGetRBID
[js_test:multi_coll_drop] 2016-04-06T02:53:36.755-0500 c20011| 2016-04-06T02:52:44.594-0500 I COMMAND [conn44] command admin.$cmd command: replSetGetRBID { replSetGetRBID: 1 } numYields:0 reslen:92 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.755-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Running query: query: {} sort: { $natural: -1 } projection: { ts: 1, h: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.756-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: { ts: 1, h: 1 }, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:36.759-0500 c20011| 2016-04-06T02:52:44.595-0500 I COMMAND [conn44] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN cursorid:17928380138 ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:101 numYields:0 nreturned:101 reslen:2848 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.762-0500 c20011| 2016-04-06T02:52:44.595-0500 D COMMAND [conn44] killcursors: found 1 of 1
[js_test:multi_coll_drop] 2016-04-06T02:53:36.763-0500 c20011| 2016-04-06T02:52:44.595-0500 I COMMAND [conn44] killcursors local.oplog.rs numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.763-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Running query: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:53:36.765-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:53:36.767-0500 c20011| 2016-04-06T02:52:44.595-0500 I COMMAND [conn44] query config.lockpings query: { _id: "mongovm16:20010:1459929128:185613966" } planSummary: IDHACK ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:85 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.769-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Running query: query: { _id: "mongovm16:20014" } sort: {} projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:53:36.771-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Using idhack: query: { _id: "mongovm16:20014" } sort: {} projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:53:36.773-0500 c20011| 2016-04-06T02:52:44.595-0500 I COMMAND [conn44] query config.mongos query: { _id: "mongovm16:20014" } planSummary: IDHACK ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:122 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.774-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Running query: query: { _id: "mongovm16:20015" } sort: {} projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:53:36.775-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Using idhack: query: { _id: "mongovm16:20015" } sort: {} projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:53:36.778-0500 c20011| 2016-04-06T02:52:44.595-0500 I COMMAND [conn44] query config.mongos query: { _id: "mongovm16:20015" } planSummary: IDHACK ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:122 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms
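conn44 is a fresh inbound connection from mongovm16:20012, and the command sequence it runs -- replSetGetRBID, a reverse $natural oplog scan projected to { ts: 1, h: 1 }, point lookups of config.lockpings and config.mongos documents by _id (hence the IDHACK plans), then replSetGetRBID again -- is consistent with 20012 running rollback against this node: find a common oplog point, refetch the documents it needs to reconcile, and check the rollback id before and after to detect interference. The same reads expressed as shell queries (a sketch only; the server issues these internally, and the _id values are simply the ones visible above):

    // Sketch only: the reads conn44 performs, issued from the shell.
    var conn = new Mongo("mongovm16:20011");
    // Newest oplog entry, timestamp and hash only (common-point search starts here):
    var last = conn.getDB("local").oplog.rs.find({}, { ts: 1, h: 1 })
                   .sort({ $natural: -1 }).limit(1).next();
    printjson(last);
    // Point lookups by _id, served by the IDHACK fast path:
    var config = conn.getDB("config");
    printjson(config.lockpings.findOne({ _id: "mongovm16:20010:1459929128:185613966" }));
    printjson(config.mongos.findOne({ _id: "mongovm16:20014" }));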
[js_test:multi_coll_drop] 2016-04-06T02:53:36.779-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1
[js_test:multi_coll_drop] 2016-04-06T02:53:36.780-0500 c20011| 2016-04-06T02:52:44.595-0500 D QUERY [conn44] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:36.783-0500 c20011| 2016-04-06T02:52:44.595-0500 I COMMAND [conn44] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:267 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.784-0500 c20011| 2016-04-06T02:52:44.595-0500 D COMMAND [conn44] run command admin.$cmd { replSetGetRBID: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.785-0500 c20011| 2016-04-06T02:52:44.595-0500 D COMMAND [conn44] command: replSetGetRBID
[js_test:multi_coll_drop] 2016-04-06T02:53:36.787-0500 c20011| 2016-04-06T02:52:44.595-0500 I COMMAND [conn44] command admin.$cmd command: replSetGetRBID { replSetGetRBID: 1 } numYields:0 reslen:92 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:36.793-0500 c20011| 2016-04-06T02:52:44.596-0500 D COMMAND [conn34] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.795-0500 c20011| 2016-04-06T02:52:44.596-0500 D COMMAND [conn34] command: replSetUpdatePosition
[js_test:multi_coll_drop] 2016-04-06T02:53:36.798-0500 c20011| 2016-04-06T02:52:44.596-0500 D REPL [conn34] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|10, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.800-0500 c20011| 2016-04-06T02:52:44.596-0500 D REPL [conn34] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929152000|2, t: 3 } and is durable through: { ts: Timestamp 1459929152000|2, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.804-0500 d20010| 2016-04-06T02:53:21.983-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.805-0500 d20010| 2016-04-06T02:53:21.987-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -55.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.810-0500 d20010| 2016-04-06T02:53:21.994-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.815-0500 d20010| 2016-04-06T02:53:21.995-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -54.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.817-0500 d20010| 2016-04-06T02:53:21.997-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.821-0500 d20010| 2016-04-06T02:53:21.999-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -53.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.824-0500 d20010| 2016-04-06T02:53:22.024-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.828-0500 d20010| 2016-04-06T02:53:22.026-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -52.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.829-0500 d20010| 2016-04-06T02:53:22.030-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.833-0500 d20010| 2016-04-06T02:53:22.036-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -51.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.836-0500 d20010| 2016-04-06T02:53:22.041-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.837-0500 d20010| 2016-04-06T02:53:22.043-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -50.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.839-0500 d20010| 2016-04-06T02:53:22.046-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.841-0500 d20010| 2016-04-06T02:53:22.047-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -49.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.843-0500 d20010| 2016-04-06T02:53:22.049-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.863-0500 d20010| 2016-04-06T02:53:22.050-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -48.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.865-0500 d20010| 2016-04-06T02:53:22.056-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.868-0500 d20010| 2016-04-06T02:53:22.059-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -47.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.870-0500 d20010| 2016-04-06T02:53:22.063-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.878-0500 d20010| 2016-04-06T02:53:22.070-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -46.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.882-0500 d20010| 2016-04-06T02:53:22.073-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.893-0500 d20010| 2016-04-06T02:53:22.075-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -45.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.896-0500 d20010| 2016-04-06T02:53:22.081-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.908-0500 d20010| 2016-04-06T02:53:22.082-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -44.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.911-0500 d20010| 2016-04-06T02:53:22.087-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.921-0500 d20010| 2016-04-06T02:53:22.091-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -43.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.922-0500 d20010| 2016-04-06T02:53:22.097-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.924-0500 d20010| 2016-04-06T02:53:22.098-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -42.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.925-0500 d20010| 2016-04-06T02:53:22.102-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.928-0500 d20010| 2016-04-06T02:53:22.102-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -41.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.930-0500 d20010| 2016-04-06T02:53:22.105-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.936-0500 d20010| 2016-04-06T02:53:22.107-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -40.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.945-0500 d20010| 2016-04-06T02:53:22.116-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.960-0500 d20010| 2016-04-06T02:53:22.118-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -39.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.966-0500 d20010| 2016-04-06T02:53:22.123-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:36.986-0500 d20010| 2016-04-06T02:53:22.126-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -38.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:36.993-0500 d20010| 2016-04-06T02:53:22.129-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.024-0500 d20010| 2016-04-06T02:53:22.129-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -37.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.035-0500 d20010| 2016-04-06T02:53:22.134-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.039-0500 d20010| 2016-04-06T02:53:22.135-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -36.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.039-0500 d20010| 2016-04-06T02:53:22.138-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.041-0500 d20010| 2016-04-06T02:53:22.139-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -35.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.042-0500 d20010| 2016-04-06T02:53:22.143-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.043-0500 d20010| 2016-04-06T02:53:22.144-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -34.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.045-0500 d20010| 2016-04-06T02:53:22.147-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.046-0500 d20010| 2016-04-06T02:53:22.148-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -33.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.050-0500 d20010| 2016-04-06T02:53:22.149-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.056-0500 d20010| 2016-04-06T02:53:22.150-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -32.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.057-0500 d20010| 2016-04-06T02:53:22.152-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.059-0500 d20010| 2016-04-06T02:53:22.152-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -31.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.060-0500 d20010| 2016-04-06T02:53:22.154-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.063-0500 d20010| 2016-04-06T02:53:22.155-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -30.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.070-0500 d20010| 2016-04-06T02:53:22.157-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.076-0500 d20010| 2016-04-06T02:53:22.157-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -29.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.079-0500 d20010| 2016-04-06T02:53:22.159-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.082-0500 d20010| 2016-04-06T02:53:22.159-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -28.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.085-0500 d20010| 2016-04-06T02:53:22.161-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.089-0500 d20010| 2016-04-06T02:53:22.162-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -27.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.092-0500 d20010| 2016-04-06T02:53:22.164-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.094-0500 d20010| 2016-04-06T02:53:22.165-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -26.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.095-0500 d20010| 2016-04-06T02:53:22.167-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.102-0500 d20010| 2016-04-06T02:53:22.168-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -25.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.104-0500 d20010| 2016-04-06T02:53:22.171-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.108-0500 d20010| 2016-04-06T02:53:22.173-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -24.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.108-0500 d20010| 2016-04-06T02:53:22.184-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.113-0500 d20010| 2016-04-06T02:53:22.195-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -23.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.117-0500 d20010| 2016-04-06T02:53:22.203-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.124-0500 d20010| 2016-04-06T02:53:22.204-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -22.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.125-0500 d20010| 2016-04-06T02:53:22.215-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.146-0500 d20010| 2016-04-06T02:53:22.217-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -21.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.148-0500 d20010| 2016-04-06T02:53:22.224-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.163-0500 d20010| 2016-04-06T02:53:22.226-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -20.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.165-0500 d20010| 2016-04-06T02:53:22.229-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.170-0500 d20010| 2016-04-06T02:53:22.232-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -19.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.172-0500 d20010| 2016-04-06T02:53:22.235-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.184-0500 d20010| 2016-04-06T02:53:22.242-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -18.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.188-0500 d20010| 2016-04-06T02:53:22.244-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.189-0500 d20010| 2016-04-06T02:53:22.246-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -17.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.193-0500 d20010| 2016-04-06T02:53:22.249-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.200-0500 d20010| 2016-04-06T02:53:22.250-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -16.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.201-0500 d20010| 2016-04-06T02:53:22.255-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.212-0500 d20010| 2016-04-06T02:53:22.256-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -15.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.215-0500 d20010| 2016-04-06T02:53:22.259-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.224-0500 d20010| 2016-04-06T02:53:22.262-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -14.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.225-0500 d20010| 2016-04-06T02:53:22.266-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.234-0500 d20010| 2016-04-06T02:53:22.267-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -13.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.237-0500 d20010| 2016-04-06T02:53:22.272-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.240-0500 d20010| 2016-04-06T02:53:22.273-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -12.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.243-0500 d20010| 2016-04-06T02:53:22.276-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.244-0500 d20010| 2016-04-06T02:53:22.277-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -11.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.245-0500 d20010| 2016-04-06T02:53:22.279-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.248-0500 d20010| 2016-04-06T02:53:22.280-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -10.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.251-0500 d20010| 2016-04-06T02:53:22.282-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.255-0500 d20010| 2016-04-06T02:53:22.283-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -9.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.259-0500 d20010| 2016-04-06T02:53:22.285-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.271-0500 d20010| 2016-04-06T02:53:22.286-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -8.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.280-0500 d20010| 2016-04-06T02:53:22.288-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.282-0500 d20010| 2016-04-06T02:53:22.289-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -7.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.314-0500 d20010| 2016-04-06T02:53:22.291-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.322-0500 d20010| 2016-04-06T02:53:22.292-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -6.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.323-0500 d20010| 2016-04-06T02:53:22.295-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.328-0500 d20010| 2016-04-06T02:53:22.296-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -5.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.329-0500 d20010| 2016-04-06T02:53:22.300-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.333-0500 d20010| 2016-04-06T02:53:22.301-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -4.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.334-0500 d20010| 2016-04-06T02:53:22.304-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.335-0500 d20010| 2016-04-06T02:53:22.311-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -3.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.337-0500 d20010| 2016-04-06T02:53:22.315-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.340-0500 d20010| 2016-04-06T02:53:22.316-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -2.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.342-0500 d20010| 2016-04-06T02:53:22.324-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.345-0500 d20010| 2016-04-06T02:53:22.327-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -1.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.353-0500 d20010| 2016-04-06T02:53:22.331-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.373-0500 d20010| 2016-04-06T02:53:22.334-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.375-0500 d20010| 2016-04-06T02:53:22.345-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.385-0500 d20010| 2016-04-06T02:53:22.346-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.400-0500 d20010| 2016-04-06T02:53:22.349-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.402-0500 d20010| 2016-04-06T02:53:22.350-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.419-0500 d20010| 2016-04-06T02:53:22.352-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.429-0500 d20010| 2016-04-06T02:53:22.354-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.432-0500 d20010| 2016-04-06T02:53:22.356-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.460-0500 d20010| 2016-04-06T02:53:22.357-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 4.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.467-0500 d20010| 2016-04-06T02:53:22.360-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.476-0500 d20010| 2016-04-06T02:53:22.363-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 5.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.481-0500 d20010| 2016-04-06T02:53:22.369-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.485-0500 d20010| 2016-04-06T02:53:22.370-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 6.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.488-0500 d20010| 2016-04-06T02:53:22.373-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.491-0500 d20010| 2016-04-06T02:53:22.374-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 7.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.492-0500 d20010| 2016-04-06T02:53:22.377-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.498-0500 d20010| 2016-04-06T02:53:22.378-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 8.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.506-0500 d20010| 2016-04-06T02:53:22.382-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.508-0500 d20010| 2016-04-06T02:53:22.383-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 9.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.509-0500 d20010| 2016-04-06T02:53:22.390-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.511-0500 d20010| 2016-04-06T02:53:22.391-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 10.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.512-0500 d20010| 2016-04-06T02:53:22.400-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.514-0500 d20010| 2016-04-06T02:53:22.400-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 11.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.518-0500 d20010| 2016-04-06T02:53:22.406-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.524-0500 d20010| 2016-04-06T02:53:22.407-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 12.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.526-0500 d20010| 2016-04-06T02:53:22.409-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.545-0500 d20010| 2016-04-06T02:53:22.409-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 13.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.551-0500 d20010| 2016-04-06T02:53:22.411-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.555-0500 d20010| 2016-04-06T02:53:22.412-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 14.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') }
[js_test:multi_coll_drop] 2016-04-06T02:53:37.558-0500 d20010| 2016-04-06T02:53:22.413-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:37.560-0500 d20010| 2016-04-06T02:53:22.414-0500 I SHARDING [conn5]
received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 15.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.562-0500 d20010| 2016-04-06T02:53:22.416-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.566-0500 d20010| 2016-04-06T02:53:22.416-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 16.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.568-0500 d20010| 2016-04-06T02:53:22.418-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.571-0500 d20010| 2016-04-06T02:53:22.418-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 17.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.574-0500 d20010| 2016-04-06T02:53:22.420-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.582-0500 d20010| 2016-04-06T02:53:22.421-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 18.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.584-0500 d20010| 2016-04-06T02:53:22.422-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.586-0500 d20010| 2016-04-06T02:53:22.423-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 19.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.590-0500 d20010| 2016-04-06T02:53:22.425-0500 W SHARDING [conn5] 
could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.596-0500 d20010| 2016-04-06T02:53:22.425-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 20.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.597-0500 d20010| 2016-04-06T02:53:22.427-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.601-0500 d20010| 2016-04-06T02:53:22.428-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 21.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.607-0500 d20010| 2016-04-06T02:53:22.430-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.610-0500 d20010| 2016-04-06T02:53:22.430-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 22.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.615-0500 d20010| 2016-04-06T02:53:22.433-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.618-0500 d20010| 2016-04-06T02:53:22.433-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 23.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.620-0500 d20010| 2016-04-06T02:53:22.437-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.623-0500 d20010| 2016-04-06T02:53:22.442-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 24.0 } ], configdb: 
"multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.625-0500 d20010| 2016-04-06T02:53:22.446-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.629-0500 d20010| 2016-04-06T02:53:22.447-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 25.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.633-0500 d20010| 2016-04-06T02:53:22.451-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.639-0500 d20010| 2016-04-06T02:53:22.452-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 26.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.640-0500 d20010| 2016-04-06T02:53:22.456-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.642-0500 d20010| 2016-04-06T02:53:22.457-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 27.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.646-0500 d20010| 2016-04-06T02:53:22.460-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.650-0500 d20010| 2016-04-06T02:53:22.461-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 28.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.651-0500 d20010| 2016-04-06T02:53:22.464-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll 
[js_test:multi_coll_drop] 2016-04-06T02:53:37.656-0500 d20010| 2016-04-06T02:53:22.465-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 29.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.657-0500 d20010| 2016-04-06T02:53:22.469-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.658-0500 d20010| 2016-04-06T02:53:22.469-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 30.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.659-0500 d20010| 2016-04-06T02:53:22.476-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.663-0500 d20010| 2016-04-06T02:53:22.481-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 31.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.663-0500 d20010| 2016-04-06T02:53:22.498-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.666-0500 d20010| 2016-04-06T02:53:22.499-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 32.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.673-0500 d20010| 2016-04-06T02:53:22.506-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.677-0500 d20010| 2016-04-06T02:53:22.509-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 33.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } 
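While these LockBusy responses continue, the holder of the contended distributed lock can be inspected in the config servers' config.locks collection. A sketch (the who/why values depend on the contending operation, so the comment is illustrative):

    // state: 2 means the lock is held; "who" and "why" identify the holder.
    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    printjson(lock);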
[js_test:multi_coll_drop] 2016-04-06T02:53:37.678-0500 d20010| 2016-04-06T02:53:22.515-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.681-0500 d20010| 2016-04-06T02:53:22.517-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 34.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.685-0500 d20010| 2016-04-06T02:53:22.530-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.688-0500 d20010| 2016-04-06T02:53:22.531-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 35.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.692-0500 d20010| 2016-04-06T02:53:22.548-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.706-0500 d20010| 2016-04-06T02:53:22.550-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 36.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.709-0500 d20010| 2016-04-06T02:53:22.556-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.714-0500 d20010| 2016-04-06T02:53:22.557-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 37.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.715-0500 d20010| 2016-04-06T02:53:22.562-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.719-0500 d20010| 2016-04-06T02:53:22.563-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 
}, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 38.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.725-0500 d20010| 2016-04-06T02:53:22.572-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.730-0500 d20010| 2016-04-06T02:53:22.573-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 39.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.734-0500 d20010| 2016-04-06T02:53:22.575-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:37.737-0500 d20010| 2016-04-06T02:53:22.576-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 40.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:37.740-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [Balancer] startCommand: RemoteCommand 107 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:48.987-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.743-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 107 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:37.746-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 107 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.747-0500 s20015| 2016-04-06T02:53:18.987-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:53:37.753-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [Balancer] startCommand: RemoteCommand 109 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:48.987-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.754-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 109 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:37.760-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 
109 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.762-0500 s20015| 2016-04-06T02:53:18.987-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:53:37.764-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.766-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.767-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.768-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.770-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.772-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.773-0500 c20013| 2016-04-06T02:52:22.560-0500 D REPL [rsSync] replication batch size is 4 [js_test:multi_coll_drop] 2016-04-06T02:53:37.777-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.778-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.790-0500 c20013| 2016-04-06T02:52:22.560-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:37.794-0500 c20013| 2016-04-06T02:52:22.560-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1052 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:37.799-0500 c20013| 2016-04-06T02:52:22.560-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:37.801-0500 c20013| 2016-04-06T02:52:22.560-0500 D QUERY [repl writer worker 8] Using idhack: { _id: "multidrop.coll" } 
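The Balancer settings reads logged above (requests 107 and 109) map to plain queries on config.settings; a shell sketch of the equivalent lookups:

    var settings = db.getSiblingDB("config").settings;
    // { _id: "chunksize", value: 50 } -> "Refreshing MaxChunkSize: 50MB"
    var chunkSize = settings.findOne({ _id: "chunksize" });
    // { _id: "balancer", stopped: true } -> the balancing round is skipped
    var balancer = settings.findOne({ _id: "balancer" });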
[js_test:multi_coll_drop] 2016-04-06T02:53:37.802-0500 c20013| 2016-04-06T02:52:22.560-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1052 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:37.805-0500 c20013| 2016-04-06T02:52:22.560-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1052 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.807-0500 c20013| 2016-04-06T02:52:22.560-0500 D QUERY [repl writer worker 8] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:37.808-0500 c20013| 2016-04-06T02:52:22.560-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:37.809-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.810-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.810-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.812-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.813-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.813-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.814-0500 c20013| 2016-04-06T02:52:22.560-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.815-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.817-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.820-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.821-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.822-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.822-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.823-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.823-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.824-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 1] shutting down 
thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.825-0500 c20013| 2016-04-06T02:52:22.561-0500 D COMMAND [conn16] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:37.828-0500 c20013| 2016-04-06T02:52:22.561-0500 D COMMAND [conn16] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.829-0500 c20013| 2016-04-06T02:52:22.561-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.830-0500 c20013| 2016-04-06T02:52:22.561-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:37.833-0500 c20013| 2016-04-06T02:52:22.561-0500 D COMMAND [conn15] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.834-0500 c20013| 2016-04-06T02:52:22.561-0500 D QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:37.834-0500 c20013| 2016-04-06T02:52:22.561-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:37.837-0500 c20013| 2016-04-06T02:52:22.561-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:37.842-0500 c20013| 2016-04-06T02:52:22.561-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:37.846-0500 c20013| 2016-04-06T02:52:22.561-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1054 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:37.860-0500 c20013| 2016-04-06T02:52:22.561-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1054 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:37.863-0500 c20013| 2016-04-06T02:52:22.561-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1054 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.867-0500 c20013| 2016-04-06T02:52:22.561-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 915ms [js_test:multi_coll_drop] 2016-04-06T02:53:37.870-0500 c20013| 2016-04-06T02:52:22.561-0500 I COMMAND [conn16] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929139000|5, t: 2 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:423 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 918ms [js_test:multi_coll_drop] 2016-04-06T02:53:37.874-0500 c20013| 2016-04-06T02:52:22.562-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { 
ts: Timestamp 1459929141000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:37.878-0500 c20013| 2016-04-06T02:52:22.562-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1056 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:37.879-0500 c20013| 2016-04-06T02:52:22.562-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1056 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:37.880-0500 c20013| 2016-04-06T02:52:22.562-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1056 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.885-0500 c20013| 2016-04-06T02:52:22.565-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1051 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|1, t: 2, h: -2425702389962912903, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: -80.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-81.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-80.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.888-0500 c20013| 2016-04-06T02:52:22.565-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|1 and ending at ts: Timestamp 1459929142000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:37.891-0500 c20013| 2016-04-06T02:52:22.565-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:37.893-0500 c20013| 2016-04-06T02:52:22.565-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.895-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.895-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.896-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.898-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.899-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.901-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.903-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.904-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.906-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.906-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.909-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.910-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.911-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.916-0500 c20013| 2016-04-06T02:52:22.566-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:37.916-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.918-0500 c20013| 2016-04-06T02:52:22.566-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-81.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:37.921-0500 c20013| 2016-04-06T02:52:22.566-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-80.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:37.923-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.925-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:37.927-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.929-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.934-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.937-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.938-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.939-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.941-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.944-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.946-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.947-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.954-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.955-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.959-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.960-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.962-0500 c20013| 2016-04-06T02:52:22.566-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:37.964-0500 c20013| 2016-04-06T02:52:22.567-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:37.968-0500 c20013| 2016-04-06T02:52:22.567-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:37.971-0500 c20013| 2016-04-06T02:52:22.567-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1059 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|1, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:37.971-0500 c20013| 2016-04-06T02:52:22.567-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1059 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:37.973-0500 c20013| 2016-04-06T02:52:22.567-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1059 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.975-0500 c20013| 2016-04-06T02:52:22.567-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1061 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.567-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:37.976-0500 c20013| 2016-04-06T02:52:22.569-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1061 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:37.991-0500 c20013| 2016-04-06T02:52:22.569-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1061 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|2, t: 2, h: 2120859807080656699, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929142564), up: 15, waiting: true } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:37.998-0500 c20013| 2016-04-06T02:52:22.569-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|2 and ending at ts: Timestamp 1459929142000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:38.008-0500 c20013| 2016-04-06T02:52:22.571-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.009-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.009-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.014-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.014-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.034-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.035-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.041-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.054-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.065-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.065-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.068-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.069-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.069-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.070-0500 c20013| 2016-04-06T02:52:22.571-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.070-0500 c20013| 2016-04-06T02:52:22.571-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:38.070-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.071-0500 c20013| 2016-04-06T02:52:22.572-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:38.079-0500 c20013| 2016-04-06T02:52:22.572-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1063 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.572-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929141000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.079-0500 c20013| 2016-04-06T02:52:22.572-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1063 on host mongovm16:20012 
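The getMore loop above is the secondary tailing its sync source's oplog; a rough shell analogue of that fetch (the server requests batches with maxTimeMS: 2500, as logged):

    // Tail local.oplog.rs from the last applied timestamp onward.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var cursor = oplog.find({ ts: { $gte: Timestamp(1459929142, 2) } })
                      .addOption(DBQuery.Option.tailable)
                      .addOption(DBQuery.Option.awaitData);
    while (cursor.hasNext()) {
        printjson(cursor.next());  // each document is one replicated op
    }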
[js_test:multi_coll_drop] 2016-04-06T02:53:38.080-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.080-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.081-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.082-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.084-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.084-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.085-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.085-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.086-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.087-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.088-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.089-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.090-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.091-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.092-0500 c20013| 2016-04-06T02:52:22.572-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.093-0500 c20013| 2016-04-06T02:52:22.575-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.094-0500 c20013| 2016-04-06T02:52:22.575-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.096-0500 c20013| 2016-04-06T02:52:22.575-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.100-0500 c20013| 2016-04-06T02:52:22.576-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.110-0500 c20013| 2016-04-06T02:52:22.576-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1064 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929139000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.112-0500 c20013| 2016-04-06T02:52:22.576-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1064 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.113-0500 c20013| 2016-04-06T02:52:22.576-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1064 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.115-0500 c20013| 2016-04-06T02:52:22.590-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1063 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.120-0500 c20013| 2016-04-06T02:52:22.590-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.135-0500 c20013| 2016-04-06T02:52:22.590-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1067 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.139-0500 c20013| 2016-04-06T02:52:22.590-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1067 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.147-0500 c20013| 2016-04-06T02:52:22.591-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1067 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.149-0500 c20013| 2016-04-06T02:52:22.591-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.151-0500 c20013| 2016-04-06T02:52:22.591-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:38.170-0500 c20013| 2016-04-06T02:52:22.591-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1069 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.591-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.175-0500 c20013| 2016-04-06T02:52:22.591-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1069 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.209-0500 c20013| 2016-04-06T02:52:22.591-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1069 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|3, t: 2, h: -7768965791966286535, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:22.591-0500-5704c03665c17830b843f1a6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142591), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -81.0 }, max: { _id: MaxKey } }, left: { min: { _id: -81.0 }, max: { _id: -80.0 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -80.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.214-0500 c20013| 2016-04-06T02:52:22.591-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|3 and ending at ts: Timestamp 1459929142000|3 [js_test:multi_coll_drop] 2016-04-06T02:53:38.222-0500 c20013| 2016-04-06T02:52:22.592-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.225-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.228-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.229-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.230-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.230-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.234-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.236-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.237-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.238-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.240-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.242-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.243-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.246-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.251-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.254-0500 c20013| 2016-04-06T02:52:22.592-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:38.261-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.263-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.270-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.270-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.274-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:38.278-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.278-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.281-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.283-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.285-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.285-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.286-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.288-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.290-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.292-0500 c20013| 2016-04-06T02:52:22.592-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.305-0500 c20013| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.309-0500 c20013| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.313-0500 c20013| 2016-04-06T02:52:22.593-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.321-0500 c20013| 2016-04-06T02:52:22.593-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.327-0500 c20013| 2016-04-06T02:52:22.593-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.330-0500 c20013| 2016-04-06T02:52:22.593-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1071 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929141000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.331-0500 c20013| 2016-04-06T02:52:22.593-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1071 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.332-0500 c20013| 2016-04-06T02:52:22.593-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1071 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.335-0500 c20013| 2016-04-06T02:52:22.594-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1073 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.594-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|1, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.336-0500 c20013| 2016-04-06T02:52:22.594-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1073 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.339-0500 c20013| 2016-04-06T02:52:22.615-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.367-0500 c20013| 2016-04-06T02:52:22.615-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1074 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.368-0500 c20013| 2016-04-06T02:52:22.615-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1074 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.371-0500 c20013| 2016-04-06T02:52:22.615-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1074 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.374-0500 c20013| 2016-04-06T02:52:22.626-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1073 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.380-0500 c20013| 2016-04-06T02:52:22.626-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|2, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.387-0500 c20013| 2016-04-06T02:52:22.626-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:38.393-0500 c20013| 2016-04-06T02:52:22.626-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1077 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.626-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|2, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.395-0500 c20013| 2016-04-06T02:52:22.626-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1077 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.406-0500 c20013| 2016-04-06T02:52:22.633-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.411-0500 c20013| 2016-04-06T02:52:22.633-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1078 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.413-0500 c20013| 2016-04-06T02:52:22.633-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1077 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.416-0500 c20013| 2016-04-06T02:52:22.633-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1078 on host 
mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.417-0500 c20013| 2016-04-06T02:52:22.634-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1078 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.418-0500 c20013| 2016-04-06T02:52:22.634-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.418-0500 c20013| 2016-04-06T02:52:22.634-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:38.427-0500 c20013| 2016-04-06T02:52:22.634-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1081 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.634-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.428-0500 c20013| 2016-04-06T02:52:22.634-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1081 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.430-0500 c20013| 2016-04-06T02:52:22.634-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1081 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|4, t: 2, h: 5387421193544532636, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.432-0500 c20013| 2016-04-06T02:52:22.634-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|4 and ending at ts: Timestamp 1459929142000|4 [js_test:multi_coll_drop] 2016-04-06T02:53:38.435-0500 c20013| 2016-04-06T02:52:22.634-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.436-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.437-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.438-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.440-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.441-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.442-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.443-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.445-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.447-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.449-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.450-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.450-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.451-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.451-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.454-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.454-0500 c20013| 2016-04-06T02:52:22.635-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:38.455-0500 c20013| 2016-04-06T02:52:22.635-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.455-0500 c20013| 2016-04-06T02:52:22.635-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:38.455-0500 c20013| 2016-04-06T02:52:22.640-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.456-0500 c20013| 2016-04-06T02:52:22.640-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:38.457-0500 c20013| 2016-04-06T02:52:22.640-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.461-0500 c20013| 2016-04-06T02:52:22.640-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.465-0500 c20013| 2016-04-06T02:52:22.640-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1083 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.640-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.467-0500 c20013| 2016-04-06T02:52:22.640-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.469-0500 c20013| 2016-04-06T02:52:22.640-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1083 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.469-0500 c20013| 2016-04-06T02:52:22.640-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.469-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.471-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.472-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.472-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.473-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.473-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.474-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.476-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.478-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.479-0500 c20013| 2016-04-06T02:52:22.641-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.479-0500 c20013| 2016-04-06T02:52:22.641-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.481-0500 c20013| 2016-04-06T02:52:22.641-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.485-0500 c20013| 2016-04-06T02:52:22.641-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1084 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.486-0500 c20013| 2016-04-06T02:52:22.641-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1084 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.488-0500 c20013| 2016-04-06T02:52:22.642-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1084 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.491-0500 c20013| 2016-04-06T02:52:22.653-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.495-0500 c20013| 2016-04-06T02:52:22.653-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1086 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.496-0500 c20013| 2016-04-06T02:52:22.653-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1086 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.503-0500 c20013| 2016-04-06T02:52:22.653-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1086 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.506-0500 c20013| 2016-04-06T02:52:22.653-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1083 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.511-0500 c20013| 2016-04-06T02:52:22.654-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|4, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.513-0500 c20013| 2016-04-06T02:52:22.654-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.579-0500 c20013| 2016-04-06T02:52:22.654-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.585-0500 c20013| 2016-04-06T02:52:22.654-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.589-0500 c20013| 2016-04-06T02:52:22.654-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:38.591-0500 c20013| 2016-04-06T02:52:22.654-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:38.595-0500 c20013| 2016-04-06T02:52:22.654-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|4, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:38.597-0500 c20013| 2016-04-06T02:52:22.654-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1089 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.654-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.598-0500 c20013| 2016-04-06T02:52:22.654-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1089 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.603-0500 c20013| 2016-04-06T02:52:22.657-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1089 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|5, t: 2, h: -3686273911828341714, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03665c17830b843f1a7'), state: 2, when: new Date(1459929142656), why: "splitting chunk [{ _id: -80.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } 
[js_test:multi_coll_drop] 2016-04-06T02:53:38.608-0500 c20013| 2016-04-06T02:52:22.657-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|5 and ending at ts: Timestamp 1459929142000|5 [js_test:multi_coll_drop] 2016-04-06T02:53:38.613-0500 c20013| 2016-04-06T02:52:22.657-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.615-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.615-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.631-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.637-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.639-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.639-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.640-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.642-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.643-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.644-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.645-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.645-0500 c20013| 2016-04-06T02:52:22.658-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:38.645-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.646-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.648-0500 c20013| 2016-04-06T02:52:22.658-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:38.649-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.649-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.650-0500 c20013| 2016-04-06T02:52:22.658-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool 
repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.650-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.651-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.651-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.652-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.653-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.654-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.655-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.656-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.656-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.657-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.657-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.658-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.658-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.658-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.659-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.661-0500 c20013| 2016-04-06T02:52:22.659-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.661-0500 c20013| 2016-04-06T02:52:22.659-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.662-0500 c20013| 2016-04-06T02:52:22.659-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.663-0500 c20013| 2016-04-06T02:52:22.660-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1091 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.664-0500 c20013| 2016-04-06T02:52:22.660-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1091 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.665-0500 c20013| 2016-04-06T02:52:22.660-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1091 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.668-0500 c20013| 2016-04-06T02:52:22.660-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1093 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.660-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|4, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.669-0500 c20013| 2016-04-06T02:52:22.660-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1093 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.672-0500 c20013| 2016-04-06T02:52:22.663-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.678-0500 c20013| 2016-04-06T02:52:22.663-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1094 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.682-0500 c20013| 2016-04-06T02:52:22.663-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1094 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.683-0500 c20013| 2016-04-06T02:52:22.664-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1094 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.692-0500 c20013| 2016-04-06T02:52:22.664-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1093 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.706-0500 c20013| 2016-04-06T02:52:22.664-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.716-0500 c20013| 2016-04-06T02:52:22.664-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:38.725-0500 c20013| 2016-04-06T02:52:22.664-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1097 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.664-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.725-0500 c20013| 2016-04-06T02:52:22.664-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1097 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.733-0500 c20013| 2016-04-06T02:52:22.664-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.765-0500 c20013| 2016-04-06T02:52:22.664-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.766-0500 c20013| 2016-04-06T02:52:22.664-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.770-0500 c20013| 2016-04-06T02:52:22.665-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:38.788-0500 c20013| 2016-04-06T02:52:22.665-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|5, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:38.792-0500 c20013| 2016-04-06T02:52:22.667-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1097 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|6, t: 2, h: 4143413929093500490, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: -79.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-80.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-79.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.793-0500 c20013| 2016-04-06T02:52:22.667-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|6 and ending at ts: Timestamp 1459929142000|6 [js_test:multi_coll_drop] 2016-04-06T02:53:38.795-0500 c20013| 2016-04-06T02:52:22.667-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.796-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.798-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.799-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.799-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.800-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.800-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.801-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.802-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.803-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.805-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.805-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.807-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.809-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.809-0500 c20013| 2016-04-06T02:52:22.668-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:38.809-0500 c20013| 2016-04-06T02:52:22.668-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll-_id_-80.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:38.810-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.811-0500 c20013| 2016-04-06T02:52:22.668-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll-_id_-79.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:38.815-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.815-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.817-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:38.820-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.821-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.824-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.825-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.827-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.828-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.829-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.835-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.836-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.838-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.840-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.845-0500 c20013| 2016-04-06T02:52:22.667-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.846-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.847-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.848-0500 c20013| 2016-04-06T02:52:22.668-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.850-0500 c20013| 2016-04-06T02:52:22.669-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.859-0500 c20013| 2016-04-06T02:52:22.669-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.864-0500 c20013| 2016-04-06T02:52:22.669-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1099 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.865-0500 c20013| 2016-04-06T02:52:22.669-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1099 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.868-0500 c20013| 2016-04-06T02:52:22.669-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1100 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.669-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.869-0500 c20013| 2016-04-06T02:52:22.669-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1100 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.871-0500 c20013| 2016-04-06T02:52:22.669-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1099 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.881-0500 c20013| 2016-04-06T02:52:22.676-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.888-0500 c20013| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1102 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.889-0500 c20013| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1102 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.889-0500 c20013| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1102 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.892-0500 c20013| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1100 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.893-0500 c20013| 2016-04-06T02:52:22.676-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.894-0500 c20013| 2016-04-06T02:52:22.676-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:38.897-0500 c20013| 2016-04-06T02:52:22.676-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1105 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.676-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.898-0500 c20013| 2016-04-06T02:52:22.676-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1105 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.903-0500 c20013| 2016-04-06T02:52:22.677-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1105 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|7, t: 2, h: 8677287472431646260, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:22.676-0500-5704c03665c17830b843f1a8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142676), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -80.0 }, max: { _id: MaxKey } }, left: { min: { _id: -80.0 }, max: { _id: -79.0 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -79.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:38.905-0500 c20013| 2016-04-06T02:52:22.677-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|7 and ending at ts: Timestamp 1459929142000|7 [js_test:multi_coll_drop] 2016-04-06T02:53:38.911-0500 c20013| 2016-04-06T02:52:22.677-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.914-0500 c20013| 2016-04-06T02:52:22.677-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.916-0500 c20013| 2016-04-06T02:52:22.677-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.917-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.918-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.920-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.920-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.921-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.922-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.924-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.926-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.926-0500 c20013| 2016-04-06T02:52:22.678-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:38.927-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.929-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.930-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.930-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.931-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.932-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.933-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.934-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.935-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool 
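
The getMore commands above (requests 1100, 1105, ...) are this secondary's oplog tailing loop against its sync source mongovm16:20012. A minimal sketch of the same read pattern from the mongo shell, assuming direct access to the sync source; the term and lastKnownCommittedOpTime fields in the logged command are internal to the replication protocol and have no shell equivalent:

    var sync = new Mongo("mongovm16:20012");
    var oplog = sync.getDB("local").getCollection("oplog.rs");
    // Resume after the last applied entry; the log prints this optime as
    // "Timestamp 1459929142000|7", i.e. Timestamp(1459929142, 7) in the shell.
    var lastTs = Timestamp(1459929142, 7);
    var cur = oplog.find({ ts: { $gt: lastTs } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) {
        printjson(cur.next()); // each document mirrors a nextBatch entry logged above
    }
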
[js_test:multi_coll_drop] 2016-04-06T02:53:38.935-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.937-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.937-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.938-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.939-0500 c20013| 2016-04-06T02:52:22.679-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.942-0500 c20013| 2016-04-06T02:52:22.679-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1107 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.679-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:38.945-0500 c20013| 2016-04-06T02:52:22.679-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1107 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:38.945-0500 c20013| 2016-04-06T02:52:22.680-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.947-0500 c20013| 2016-04-06T02:52:22.680-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.949-0500 c20013| 2016-04-06T02:52:22.680-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.953-0500 c20013| 2016-04-06T02:52:22.678-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.956-0500 c20013| 2016-04-06T02:52:22.681-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.965-0500 c20013| 2016-04-06T02:52:22.681-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.967-0500 c20013| 2016-04-06T02:52:22.681-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.967-0500 c20013| 2016-04-06T02:52:22.681-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:38.978-0500 c20013| 2016-04-06T02:52:22.681-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:38.988-0500 c20013| 2016-04-06T02:52:22.681-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.994-0500 c20013| 2016-04-06T02:52:22.681-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1108 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:38.999-0500 c20013| 2016-04-06T02:52:22.681-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1108 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.012-0500 c20013| 2016-04-06T02:52:22.681-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1108 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.026-0500 c20013| 2016-04-06T02:52:22.690-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.035-0500 c20013| 2016-04-06T02:52:22.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1110 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.037-0500 c20013| 2016-04-06T02:52:22.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1110 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.040-0500 c20013| 2016-04-06T02:52:22.690-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1110 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.040-0500 c20013| 2016-04-06T02:52:22.690-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1107 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.041-0500 c20013| 2016-04-06T02:52:22.690-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.042-0500 c20013| 2016-04-06T02:52:22.690-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:39.049-0500 c20013| 2016-04-06T02:52:22.690-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1113 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.690-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:39.050-0500 c20013| 2016-04-06T02:52:22.691-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1113 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.055-0500 c20013| 2016-04-06T02:52:22.691-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1113 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|8, t: 2, h: 6588589552944971315, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.056-0500 c20013| 2016-04-06T02:52:22.691-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|8 and ending at ts: Timestamp 1459929142000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:39.061-0500 c20013| 2016-04-06T02:52:22.691-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
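
The op just fetched ({ $set: { state: 0 } } on config.locks) is the release of the distributed lock that protected the previous split; a few entries below, the same document is flipped back to state: 2 while the next chunk is split. A read-only sketch for inspecting that lock document (state 0 = unlocked, 2 = locked; these documents are managed by the servers themselves):

    var conf = db.getSiblingDB("config");
    printjson(conf.locks.findOne({ _id: "multidrop.coll" }));
    // e.g. { _id: "multidrop.coll", state: 2, ts: ObjectId(...),
    //        why: "splitting chunk [{ _id: -79.0 }, { _id: MaxKey }) in multidrop.coll", ... }
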
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:39.062-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.064-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.065-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.067-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.067-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.068-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.071-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.071-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.073-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.073-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.075-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.075-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.078-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.079-0500 c20013| 2016-04-06T02:52:22.692-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:39.084-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.089-0500 c20013| 2016-04-06T02:52:22.692-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:39.090-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.092-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.095-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.097-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:39.101-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.105-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.108-0500 c20013| 2016-04-06T02:52:22.692-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.109-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.113-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.113-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.113-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.124-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.126-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.131-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.133-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.133-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.134-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.136-0500 c20013| 2016-04-06T02:52:22.693-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.140-0500 c20013| 2016-04-06T02:52:22.693-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
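
The "Using idhack" lines above show the oplog appliers taking the _id fast path: an exact-match _id predicate bypasses plan enumeration entirely. The same path can be observed from the shell (a sketch; the IDHACK stage appears as the winning plan):

    var conf = db.getSiblingDB("config");
    var plan = conf.locks.find({ _id: "multidrop.coll" }).explain();
    printjson(plan.queryPlanner.winningPlan); // IDHACK: direct _id lookup, no plan ranking
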
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:39.152-0500 c20013| 2016-04-06T02:52:22.693-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.155-0500 c20013| 2016-04-06T02:52:22.693-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1115 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.156-0500 c20013| 2016-04-06T02:52:22.693-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1115 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.157-0500 c20013| 2016-04-06T02:52:22.694-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1115 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.161-0500 c20013| 2016-04-06T02:52:22.694-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1117 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.694-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:39.162-0500 c20013| 2016-04-06T02:52:22.694-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1117 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.171-0500 c20013| 2016-04-06T02:52:22.698-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.178-0500 c20013| 2016-04-06T02:52:22.698-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1118 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.180-0500 c20013| 2016-04-06T02:52:22.698-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1118 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.182-0500 c20013| 2016-04-06T02:52:22.699-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1118 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.184-0500 c20013| 2016-04-06T02:52:22.699-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1117 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.188-0500 c20013| 2016-04-06T02:52:22.699-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.191-0500 c20013| 2016-04-06T02:52:22.699-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:39.219-0500 c20013| 2016-04-06T02:52:22.699-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1121 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.699-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:39.231-0500 c20013| 2016-04-06T02:52:22.700-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1121 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.250-0500 c20013| 2016-04-06T02:52:22.702-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.254-0500 c20013| 2016-04-06T02:52:22.702-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:39.261-0500 c20013| 2016-04-06T02:52:22.702-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.264-0500 c20013| 2016-04-06T02:52:22.702-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:39.272-0500 c20013| 2016-04-06T02:52:22.702-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:39.280-0500 c20013| 2016-04-06T02:52:22.703-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1121 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|9, t: 2, h: 4988155221125799883, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03665c17830b843f1a9'), state: 2, when: new Date(1459929142702), why: "splitting chunk [{ _id: -79.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.284-0500 c20013| 2016-04-06T02:52:22.703-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|9 and ending at ts: Timestamp 1459929142000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:39.293-0500 c20013| 2016-04-06T02:52:22.704-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:39.298-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.330-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.332-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.354-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.355-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.395-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.403-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.405-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.407-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.410-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.632-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.632-0500 c20013| 2016-04-06T02:52:22.704-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:39.648-0500 c20013| 2016-04-06T02:52:22.704-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:39.650-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.656-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.657-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.658-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.659-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.660-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.661-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:39.661-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.663-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.669-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.670-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.672-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.675-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.675-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.676-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.677-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.678-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.678-0500 c20013| 2016-04-06T02:52:22.704-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.680-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.684-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.685-0500 c20013| 2016-04-06T02:52:22.705-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.695-0500 c20013| 2016-04-06T02:52:22.705-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:39.713-0500 c20013| 2016-04-06T02:52:22.705-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.725-0500 c20013| 2016-04-06T02:52:22.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1123 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.729-0500 c20013| 2016-04-06T02:52:22.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1123 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.729-0500 c20013| 2016-04-06T02:52:22.706-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1123 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.742-0500 c20013| 2016-04-06T02:52:22.706-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1125 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.706-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:39.745-0500 c20013| 2016-04-06T02:52:22.706-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1125 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.754-0500 c20013| 2016-04-06T02:52:22.708-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.759-0500 c20013| 2016-04-06T02:52:22.708-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1126 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.761-0500 c20013| 2016-04-06T02:52:22.708-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1126 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.768-0500 c20013| 2016-04-06T02:52:22.709-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1126 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.771-0500 c20013| 2016-04-06T02:52:22.709-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1125 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.774-0500 c20013| 2016-04-06T02:52:22.709-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.775-0500 c20013| 2016-04-06T02:52:22.709-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:39.784-0500 c20013| 2016-04-06T02:52:22.709-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1129 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.709-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:39.785-0500 c20013| 2016-04-06T02:52:22.709-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1129 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.800-0500 c20013| 2016-04-06T02:52:22.710-0500 D COMMAND [conn15] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|44 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|9, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.819-0500 c20013| 2016-04-06T02:52:22.710-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:39.824-0500 c20013| 2016-04-06T02:52:22.710-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|44 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|9, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.828-0500 c20013| 2016-04-06T02:52:22.710-0500 D QUERY [conn15] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:39.839-0500 c20013| 2016-04-06T02:52:22.710-0500 I COMMAND [conn15] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|44 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929142000|9, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:39.847-0500 c20013| 2016-04-06T02:52:22.711-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1129 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|10, t: 2, h: -1872902091255565203, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: -78.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-79.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-78.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.848-0500 c20013| 2016-04-06T02:52:22.711-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|10 and ending at ts: Timestamp 1459929142000|10 [js_test:multi_coll_drop] 2016-04-06T02:53:39.851-0500 c20013| 2016-04-06T02:52:22.712-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:39.852-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.852-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.854-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.857-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.859-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.861-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.861-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.866-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.870-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.872-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.875-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.878-0500 c20013| 2016-04-06T02:52:22.712-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:39.886-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.886-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.899-0500 c20013| 2016-04-06T02:52:22.712-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-79.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:39.905-0500 c20013| 2016-04-06T02:52:22.712-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-78.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:39.906-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.911-0500 c20013| 2016-04-06T02:52:22.712-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.913-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.913-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:39.916-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.918-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.919-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.926-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.928-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.930-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.931-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.931-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.934-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.939-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.940-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.941-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.944-0500 c20013| 2016-04-06T02:52:22.713-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.958-0500 c20013| 2016-04-06T02:52:22.714-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1131 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.714-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:39.959-0500 c20013| 2016-04-06T02:52:22.714-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1131 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.959-0500 c20013| 2016-04-06T02:52:22.723-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.961-0500 c20013| 2016-04-06T02:52:22.723-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:39.969-0500 c20013| 2016-04-06T02:52:22.723-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:39.974-0500 c20013| 2016-04-06T02:52:22.723-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.987-0500 c20013| 2016-04-06T02:52:22.723-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1132 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:39.990-0500 c20013| 2016-04-06T02:52:22.723-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1132 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:39.991-0500 c20013| 2016-04-06T02:52:22.723-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1132 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:39.997-0500 c20013| 2016-04-06T02:52:22.727-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:40.024-0500 c20013| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1134 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:40.025-0500 c20013| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1134 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:40.042-0500 c20013| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1131 finished with 
response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.046-0500 c20013| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1134 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.072-0500 c20013| 2016-04-06T02:52:22.727-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.074-0500 c20013| 2016-04-06T02:52:22.727-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:40.083-0500 c20013| 2016-04-06T02:52:22.727-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1137 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.727-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:40.084-0500 c20013| 2016-04-06T02:52:22.727-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1137 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:40.092-0500 c20013| 2016-04-06T02:52:22.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1137 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|11, t: 2, h: 1869687273915284121, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:22.727-0500-5704c03665c17830b843f1aa", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929142727), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -79.0 }, max: { _id: MaxKey } }, left: { min: { _id: -79.0 }, max: { _id: -78.0 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -78.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.097-0500 c20013| 2016-04-06T02:52:22.728-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|11 and ending at ts: Timestamp 1459929142000|11 [js_test:multi_coll_drop] 2016-04-06T02:53:40.102-0500 c20013| 2016-04-06T02:52:22.728-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:40.104-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.105-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.106-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.113-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.115-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.124-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.128-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.129-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.129-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.130-0500 c20013| 2016-04-06T02:52:22.729-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:40.134-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.135-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.135-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.136-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.144-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.146-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.147-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.152-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.154-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.157-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool 
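
Each replSetUpdatePosition round trip above pushes this member's durable and applied optimes to the sync source so the primary can advance the majority commit point, which shows up here as the interleaved "Updating _lastCommittedOpTime" entries. The same per-member optimes are visible from the shell (sketch):

    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function(m) {
        print(m.name + " state: " + m.stateStr + " applied: " + tojson(m.optime));
    });
    // once a majority of members report an optime, entries up to it become committed
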
[js_test:multi_coll_drop] 2016-04-06T02:53:40.158-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.158-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.161-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.165-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.167-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.167-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.169-0500 c20013| 2016-04-06T02:52:22.729-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.170-0500 c20013| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.170-0500 c20013| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.170-0500 c20013| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.171-0500 c20013| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.171-0500 c20013| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.172-0500 c20013| 2016-04-06T02:52:22.730-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.182-0500 c20013| 2016-04-06T02:52:22.731-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1139 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.731-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:40.197-0500 c20013| 2016-04-06T02:52:22.731-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:40.200-0500 c20013| 2016-04-06T02:52:22.731-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1139 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:40.214-0500 c20013| 2016-04-06T02:52:22.731-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:40.222-0500 c20013| 2016-04-06T02:52:22.731-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1140 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:40.223-0500 c20013| 2016-04-06T02:52:22.731-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1140 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:40.227-0500 c20013| 2016-04-06T02:52:22.731-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1140 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.234-0500 c20013| 2016-04-06T02:52:22.740-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:40.240-0500 c20013| 2016-04-06T02:52:22.740-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1142 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:40.240-0500 c20013| 2016-04-06T02:52:22.740-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 
1142 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:40.250-0500 c20013| 2016-04-06T02:52:22.740-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1142 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.253-0500 c20013| 2016-04-06T02:52:22.740-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1139 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.254-0500 c20013| 2016-04-06T02:52:22.740-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|11, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.255-0500 c20013| 2016-04-06T02:52:22.740-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:40.259-0500 c20013| 2016-04-06T02:52:22.741-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1145 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.741-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:40.262-0500 c20013| 2016-04-06T02:52:22.741-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1145 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:40.263-0500 c20013| 2016-04-06T02:52:22.741-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1145 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929142000|12, t: 2, h: -7145308920045400114, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.266-0500 c20013| 2016-04-06T02:52:22.741-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929142000|12 and ending at ts: Timestamp 1459929142000|12 [js_test:multi_coll_drop] 2016-04-06T02:53:40.268-0500 c20013| 2016-04-06T02:52:22.742-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
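The update op in Request 1145's batch sets state: 0 on the config.locks document for "multidrop.coll": the distributed lock taken for the previous split is being released (state 2 means held, 0 means free). A hedged shell sketch for spot-checking that lock document, illustrative only:

  // Sketch (illustrative only): inspect the distributed lock document that
  // the split path toggles; state 2 = held, state 0 = released.
  var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
  if (lock) {
      print("state=" + lock.state + " why=" + tojson(lock.why));
  }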
[js_test:multi_coll_drop] 2016-04-06T02:53:40.268-0500 c20013| 2016-04-06T02:52:22.742-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.269-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.274-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.276-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.278-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.278-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.279-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.280-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.281-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.282-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.283-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.284-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.284-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.286-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.287-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.289-0500 c20013| 2016-04-06T02:52:22.742-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:40.292-0500 c20013| 2016-04-06T02:52:22.742-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.293-0500 c20013| 2016-04-06T02:52:22.742-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.296-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.296-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.297-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.298-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.299-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.301-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.302-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.306-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.307-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.308-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.308-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.309-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.310-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.310-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.316-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.318-0500 c20013| 2016-04-06T02:52:22.743-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1147 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.743-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|11, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.320-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.321-0500 c20013| 2016-04-06T02:52:22.743-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.324-0500 c20013| 2016-04-06T02:52:22.743-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1147 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.328-0500 c20013| 2016-04-06T02:52:22.744-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.338-0500 c20013| 2016-04-06T02:52:22.744-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.349-0500 c20013| 2016-04-06T02:52:22.744-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1148 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|11, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.362-0500 c20013| 2016-04-06T02:52:22.744-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1148 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.365-0500 c20013| 2016-04-06T02:52:22.744-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1148 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.373-0500 c20013| 2016-04-06T02:52:22.746-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.378-0500 c20013| 2016-04-06T02:52:22.746-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1150 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.381-0500 c20013| 2016-04-06T02:52:22.746-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1150 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.383-0500 c20013| 2016-04-06T02:52:22.746-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1150 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.384-0500 c20013| 2016-04-06T02:52:22.746-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1147 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.385-0500 c20013| 2016-04-06T02:52:22.746-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929142000|12, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.387-0500 c20013| 2016-04-06T02:52:22.746-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:40.389-0500 c20013| 2016-04-06T02:52:22.747-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1153 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:27.747-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.389-0500 c20013| 2016-04-06T02:52:22.747-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1153 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.392-0500 c20013| 2016-04-06T02:52:23.554-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.392-0500 c20013| 2016-04-06T02:52:23.554-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:40.395-0500 c20013| 2016-04-06T02:52:23.554-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:40.395-0500 c20013| 2016-04-06T02:52:23.720-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.396-0500 c20013| 2016-04-06T02:52:23.720-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:40.401-0500 c20013| 2016-04-06T02:52:24.055-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1154 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:34.055-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.402-0500 c20013| 2016-04-06T02:52:24.055-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1154 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:40.406-0500 c20013| 2016-04-06T02:52:24.056-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1154 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, opTime: { ts: Timestamp 1459929142000|12, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.407-0500 c20013| 2016-04-06T02:52:24.056-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:26.056Z
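The replSetHeartbeat and isMaster traffic above is routine: each member heartbeats the others on a short interval, learns their state (1 = primary, 2 = secondary) and sync source, and schedules the next round. Roughly the same view is available interactively; a sketch, assuming a shell connected to any member (field names such as syncingTo as reported by this server version):

  // Sketch: the member states and sync sources exchanged in these heartbeats
  // are also surfaced by replSetGetStatus.
  var status = db.adminCommand({ replSetGetStatus: 1 });
  status.members.forEach(function(m) {
      print(m.name + " " + m.stateStr + (m.syncingTo ? " -> " + m.syncingTo : ""));
  });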
[js_test:multi_coll_drop] 2016-04-06T02:53:40.413-0500 c20013| 2016-04-06T02:52:24.076-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1156 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:34.076-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.423-0500 c20013| 2016-04-06T02:52:24.077-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1156 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.428-0500 c20013| 2016-04-06T02:52:25.246-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.432-0500 c20013| 2016-04-06T02:52:25.246-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1157 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.434-0500 c20013| 2016-04-06T02:52:25.246-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1157 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.435-0500 c20013| 2016-04-06T02:52:25.555-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.435-0500 c20013| 2016-04-06T02:52:25.555-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:40.439-0500 c20013| 2016-04-06T02:52:25.555-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:40.442-0500 c20013| 2016-04-06T02:52:26.056-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1158 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:36.056-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.444-0500 c20013| 2016-04-06T02:52:26.056-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1158 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:40.446-0500 c20013| 2016-04-06T02:52:26.056-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1158 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, opTime: { ts: Timestamp 1459929142000|12, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.447-0500 c20013| 2016-04-06T02:52:26.056-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:28.056Z
[js_test:multi_coll_drop] 2016-04-06T02:53:40.449-0500 c20013| 2016-04-06T02:52:26.809-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1153 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.449-0500 c20013| 2016-04-06T02:52:26.809-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.450-0500 c20013| 2016-04-06T02:52:26.809-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:40.451-0500 c20013| 2016-04-06T02:52:26.810-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:40.452-0500 c20013| 2016-04-06T02:52:26.810-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1161 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.810-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.453-0500 c20013| 2016-04-06T02:52:26.810-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1161 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.456-0500 c20013| 2016-04-06T02:52:26.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1156 finished with response: { ok: 1.0, electionTime: new Date(6270347906482438145), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, opTime: { ts: Timestamp 1459929142000|12, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.457-0500 c20013| 2016-04-06T02:52:26.811-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:28.811Z
[js_test:multi_coll_drop] 2016-04-06T02:53:40.459-0500 c20013| 2016-04-06T02:52:26.811-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1157 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.461-0500 c20013| 2016-04-06T02:52:26.811-0500 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.461-0500 c20013| 2016-04-06T02:52:26.811-0500 D COMMAND [conn14] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:40.465-0500 c20013| 2016-04-06T02:52:26.811-0500 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:40.468-0500 c20013| 2016-04-06T02:52:26.812-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1161 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|1, t: 2, h: -9183148587310720839, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03a65c17830b843f1ab'), state: 2, when: new Date(1459929146811), why: "splitting chunk [{ _id: -78.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
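Request 1161's batch shows the lock being re-acquired for the next split: an op: "u" oplog entry in which o2 is the _id match and o carries the applied modification (ts, state: 2, when, why). Reading such entries straight off a member's oplog looks roughly like this; illustrative sketch only:

  // Sketch (illustrative only): the most recent lock transitions for this
  // collection, straight from the oplog; o2 = match, o = applied $set.
  db.getSiblingDB("local").oplog.rs
    .find({ ns: "config.locks", op: "u", "o2._id": "multidrop.coll" })
    .sort({ $natural: -1 })
    .limit(2)
    .forEach(printjson);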
[js_test:multi_coll_drop] 2016-04-06T02:53:40.469-0500 c20013| 2016-04-06T02:52:26.812-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|1 and ending at ts: Timestamp 1459929146000|1
[js_test:multi_coll_drop] 2016-04-06T02:53:40.469-0500 c20013| 2016-04-06T02:52:26.812-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.470-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.471-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.472-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.473-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.473-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.475-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.476-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.477-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.478-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.478-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.478-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.479-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.479-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.480-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.482-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.483-0500 c20013| 2016-04-06T02:52:26.813-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:40.483-0500 c20013| 2016-04-06T02:52:26.813-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.484-0500 c20013| 2016-04-06T02:52:26.813-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.485-0500 c20013| 2016-04-06T02:52:26.814-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1165 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.814-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929142000|12, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.489-0500 c20013| 2016-04-06T02:52:26.814-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1165 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.489-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.490-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.491-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.492-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.492-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.494-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.495-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.496-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.501-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.503-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.504-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.507-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.508-0500 c20013| 2016-04-06T02:52:26.815-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.508-0500 c20013| 2016-04-06T02:52:26.816-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.509-0500 c20013| 2016-04-06T02:52:26.820-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.511-0500 c20013| 2016-04-06T02:52:26.820-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.512-0500 c20013| 2016-04-06T02:52:26.820-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1165 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.513-0500 c20013| 2016-04-06T02:52:26.820-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|1, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.515-0500 c20013| 2016-04-06T02:52:26.820-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.516-0500 c20013| 2016-04-06T02:52:26.820-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:40.519-0500 c20013| 2016-04-06T02:52:26.820-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1167 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.820-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.524-0500 c20013| 2016-04-06T02:52:26.820-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1167 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.528-0500 c20013| 2016-04-06T02:52:26.820-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.534-0500 c20013| 2016-04-06T02:52:26.820-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1168 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.535-0500 c20013| 2016-04-06T02:52:26.820-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1168 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.536-0500 c20013| 2016-04-06T02:52:26.821-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1168 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.543-0500 c20013| 2016-04-06T02:52:26.822-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.548-0500 c20013| 2016-04-06T02:52:26.822-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1170 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.550-0500 c20013| 2016-04-06T02:52:26.822-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1170 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.560-0500 c20013| 2016-04-06T02:52:26.822-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1167 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|2, t: 2, h: -8119450810825688742, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: -77.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-78.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-77.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.560-0500 c20013| 2016-04-06T02:52:26.822-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1170 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.563-0500 c20013| 2016-04-06T02:52:26.823-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|2 and ending at ts: Timestamp 1459929146000|2
[js_test:multi_coll_drop] 2016-04-06T02:53:40.563-0500 c20013| 2016-04-06T02:52:26.823-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes
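Request 1167's batch is the split commit itself: a single applyOps command against config.$cmd that rewrites config.chunks, replacing the chunk starting at { _id: -78.0 } with two chunks at versions 1000|47 and 1000|48, under writeConcern { w: "majority", wtimeout: 15000 }. A sketch for verifying the resulting chunk metadata afterwards, illustrative only:

  // Sketch (illustrative only): after the applyOps above replicates,
  // config.chunks holds both halves with bumped versions.
  db.getSiblingDB("config").chunks
    .find({ ns: "multidrop.coll", "min._id": { $in: [-78.0, -77.0] } })
    .forEach(function(c) {
        print(tojson(c.min) + " -> " + tojson(c.max) + " @ " + tojson(c.lastmod));
    });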
[js_test:multi_coll_drop] 2016-04-06T02:53:40.565-0500 c20013| 2016-04-06T02:52:26.823-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.566-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.566-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.568-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.569-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.569-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.570-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.571-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.572-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.572-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.573-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.576-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.576-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.577-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.579-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.580-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.580-0500 c20013| 2016-04-06T02:52:26.823-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:40.581-0500 c20013| 2016-04-06T02:52:26.823-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.582-0500 c20013| 2016-04-06T02:52:26.823-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-78.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.583-0500 c20013| 2016-04-06T02:52:26.824-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-77.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.585-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.588-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.595-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.597-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.600-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.601-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.601-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.603-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.604-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.606-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.607-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.608-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.610-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.610-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.610-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.611-0500 c20013| 2016-04-06T02:52:26.824-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.612-0500 c20013| 2016-04-06T02:52:26.824-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.615-0500 c20013| 2016-04-06T02:52:26.824-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.618-0500 c20013| 2016-04-06T02:52:26.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1173 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.619-0500 c20013| 2016-04-06T02:52:26.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1173 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.619-0500 c20013| 2016-04-06T02:52:26.824-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1173 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.621-0500 c20013| 2016-04-06T02:52:26.825-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1175 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.825-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|1, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.623-0500 c20013| 2016-04-06T02:52:26.825-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1175 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.627-0500 c20013| 2016-04-06T02:52:26.827-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.630-0500 c20013| 2016-04-06T02:52:26.827-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1176 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.630-0500 c20013| 2016-04-06T02:52:26.827-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1176 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.639-0500 c20013| 2016-04-06T02:52:26.827-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1176 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.643-0500 c20013| 2016-04-06T02:52:26.830-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1175 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|3, t: 2, h: 7943051809962790375, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:26.828-0500-5704c03a65c17830b843f1ac", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146828), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -78.0 }, max: { _id: MaxKey } }, left: { min: { _id: -78.0 }, max: { _id: -77.0 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -77.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.645-0500 c20013| 2016-04-06T02:52:26.831-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|2, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.647-0500 c20013| 2016-04-06T02:52:26.831-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|3 and ending at ts: Timestamp 1459929146000|3
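The replSetUpdatePosition commands above are how this secondary reports its durable and applied optimes upstream; once a majority of members is durable at an optime, the primary can acknowledge w: "majority" writes such as the split's applyOps. The same guarantee can be requested on any write; a hedged sketch against a scratch namespace (collection and field names are illustrative, not from the test):

  // Sketch: this insert returns only once a majority of the set reports a
  // durable optime at or past the write, mirroring the split's writeConcern.
  db.getSiblingDB("test").scratch.insert(
      { probe: 1 },
      { writeConcern: { w: "majority", wtimeout: 15000 } }
  );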
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:40.649-0500 c20013| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.649-0500 c20013| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.650-0500 c20013| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.651-0500 c20013| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.651-0500 c20013| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.652-0500 c20013| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.653-0500 c20013| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.655-0500 c20013| 2016-04-06T02:52:26.831-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.656-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.657-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.657-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.658-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.660-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.661-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.664-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.665-0500 c20013| 2016-04-06T02:52:26.832-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:40.666-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.668-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.669-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.673-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:40.675-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.676-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.680-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.682-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.685-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.687-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.688-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.689-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.692-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.694-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.696-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.696-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.697-0500 c20013| 2016-04-06T02:52:26.832-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:40.702-0500 c20013| 2016-04-06T02:52:26.833-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:40.705-0500 c20013| 2016-04-06T02:52:26.833-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:40.713-0500 c20013| 2016-04-06T02:52:26.833-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1179 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|2, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:40.715-0500 c20013| 2016-04-06T02:52:26.833-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1179 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:40.717-0500 c20013| 2016-04-06T02:52:26.833-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1180 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.833-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|2, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:40.719-0500 c20013| 2016-04-06T02:52:26.833-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1180 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:40.720-0500 c20013| 2016-04-06T02:52:26.833-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1179 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.724-0500 c20013| 2016-04-06T02:52:26.833-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1180 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.724-0500 c20013| 2016-04-06T02:52:26.833-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|3, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:40.725-0500 c20013| 2016-04-06T02:52:26.833-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:40.725-0500 c20013| 2016-04-06T02:52:26.833-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1183 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.833-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:40.726-0500 c20013| 2016-04-06T02:52:26.833-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1183 on host mongovm16:20012 
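(The getMore loop above is ordinary tailable-cursor traffic against local.oplog.rs. A minimal shell sketch of the same pattern follows; this is my own illustration, not part of the test, with the host and the 2500ms wait taken from the entries above.)

    // Tail local.oplog.rs the way the fetcher above does: a tailable, awaitData
    // cursor whose getMores block up to ~2500ms before returning an empty batch.
    var conn = new Mongo("mongovm16:20012"); // the sync source in these entries
    var oplog = conn.getDB("local").oplog.rs;
    var last = oplog.find().sort({$natural: -1}).limit(1).next().ts;
    var cur = oplog.find({ts: {$gt: last}})
                   .addOption(DBQuery.Option.tailable | DBQuery.Option.awaitData)
                   .maxTimeMS(2500);
    while (cur.hasNext()) printjson(cur.next()); // docs shaped like the nextBatch entries above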
[js_test:multi_coll_drop] 2016-04-06T02:53:40.729-0500 c20013| 2016-04-06T02:52:26.834-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1183 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|4, t: 2, h: 9033909893478134583, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.731-0500 c20013| 2016-04-06T02:52:26.834-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|4 and ending at ts: Timestamp 1459929146000|4
[js_test:multi_coll_drop] 2016-04-06T02:53:40.732-0500 c20013| 2016-04-06T02:52:26.835-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.732-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.733-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.733-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.734-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.734-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.736-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.737-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.738-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.739-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.740-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.740-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.741-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.742-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.743-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.745-0500 c20013| 2016-04-06T02:52:26.835-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:40.745-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.746-0500 c20013| 2016-04-06T02:52:26.835-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.751-0500 c20013| 2016-04-06T02:52:26.835-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.757-0500 c20013| 2016-04-06T02:52:26.835-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1185 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.758-0500 c20013| 2016-04-06T02:52:26.835-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1185 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.760-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.760-0500 c20013| 2016-04-06T02:52:26.835-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1185 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.762-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.762-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.763-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.763-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.765-0500 c20012| 2016-04-06T02:53:08.701-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:40.766-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.767-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.768-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
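(The config.locks update being applied above, { $set: { state: 0 } }, is a distributed-lock release; the later entry that sets state: 2 with a "why" of "splitting chunk ..." is the corresponding acquisition. A hypothetical shell check of that lock document, not part of the test; in this era of the code state 0 is unlocked and state 2 is held, as the two oplog entries in this log show.)

    // Inspect the collection's distributed lock on a config server. The ts and
    // why fields match the lock-acquisition oplog entry seen later in this log.
    var config = new Mongo("mongovm16:20011").getDB("config");
    printjson(config.locks.findOne({_id: "multidrop.coll"}));
    // e.g. { _id: "multidrop.coll", state: 2, ts: ObjectId(...),
    //        why: "splitting chunk [{ _id: -77.0 }, { _id: MaxKey }) in multidrop.coll" }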
[js_test:multi_coll_drop] 2016-04-06T02:53:40.770-0500 c20012| 2016-04-06T02:53:08.702-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-64.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.772-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [Balancer] startCommand: RemoteCommand 111 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:48.987-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198987), up: 71, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.773-0500 s20015| 2016-04-06T02:53:18.987-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 111 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:40.775-0500 s20015| 2016-04-06T02:53:18.995-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 111 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929198000|4, t: 5 }, electionId: ObjectId('7fffffff0000000000000005') }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.776-0500 c20011| 2016-04-06T02:52:44.596-0500 I COMMAND [conn34] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:40.778-0500 c20011| 2016-04-06T02:52:44.609-0500 D NETWORK [conn44] SocketException: remote: 192.168.100.28:32849 error: 9001 socket exception [CLOSED] server [192.168.100.28:32849]
[js_test:multi_coll_drop] 2016-04-06T02:53:40.779-0500 c20011| 2016-04-06T02:52:44.609-0500 I NETWORK [conn44] end connection 192.168.100.28:32849 (16 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:40.782-0500 c20011| 2016-04-06T02:52:45.724-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 310 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:55.724-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.784-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.785-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.787-0500 c20012| 2016-04-06T02:53:08.702-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-63.0" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.787-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.791-0500 c20012| 2016-04-06T02:53:08.701-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.793-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.794-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.795-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.798-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.799-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.800-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.800-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.803-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.804-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.814-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.816-0500 c20012| 2016-04-06T02:53:08.702-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.818-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.818-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.819-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.820-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.821-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.821-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.823-0500 c20012| 2016-04-06T02:53:08.703-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.827-0500 c20012| 2016-04-06T02:53:08.703-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.828-0500 c20012| 2016-04-06T02:53:08.703-0500 D REPL [conn42] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29743132μs
[js_test:multi_coll_drop] 2016-04-06T02:53:40.833-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.834-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.835-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.837-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.837-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.838-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.838-0500 c20012| 2016-04-06T02:53:08.703-0500 D REPL [rsSync] replication batch size is 5
[js_test:multi_coll_drop] 2016-04-06T02:53:40.840-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.840-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.841-0500 c20012| 2016-04-06T02:53:08.703-0500 D QUERY [repl writer worker 9] Using idhack: { _id: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.842-0500 c20012| 2016-04-06T02:53:08.704-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.842-0500 c20012| 2016-04-06T02:53:08.704-0500 D QUERY [repl writer worker 11] Using idhack: { _id: "mongovm16:20015" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.843-0500 c20012| 2016-04-06T02:53:08.703-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.844-0500 c20012| 2016-04-06T02:53:08.704-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.845-0500 c20012| 2016-04-06T02:53:08.704-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.850-0500 c20012| 2016-04-06T02:53:08.704-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.853-0500 c20012| 2016-04-06T02:53:08.704-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1226 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.854-0500 c20012| 2016-04-06T02:53:08.704-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.869-0500 c20012| 2016-04-06T02:53:08.704-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1226 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:40.869-0500 c20012| 2016-04-06T02:53:08.704-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.869-0500 c20012| 2016-04-06T02:53:08.704-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.871-0500 c20012| 2016-04-06T02:53:08.704-0500 D QUERY [repl writer worker 8] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.871-0500 c20012| 2016-04-06T02:53:08.704-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.873-0500 c20012| 2016-04-06T02:53:08.704-0500 D QUERY [repl writer worker 8] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.875-0500 c20012| 2016-04-06T02:53:08.705-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.875-0500 c20012| 2016-04-06T02:53:08.705-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.879-0500 c20012| 2016-04-06T02:53:08.705-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.885-0500 c20012| 2016-04-06T02:53:08.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1226 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.890-0500 c20012| 2016-04-06T02:53:08.705-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.893-0500 c20012| 2016-04-06T02:53:08.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1227 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.894-0500 c20012| 2016-04-06T02:53:08.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1227 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:40.895-0500 c20012| 2016-04-06T02:53:08.705-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1227 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.895-0500 c20012| 2016-04-06T02:53:08.705-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.899-0500 c20012| 2016-04-06T02:53:08.705-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.900-0500 c20012| 2016-04-06T02:53:08.705-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.902-0500 c20012| 2016-04-06T02:53:08.705-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.902-0500 c20012| 2016-04-06T02:53:08.705-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.903-0500 s20014| 2016-04-06T02:53:18.974-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20010, no events
[js_test:multi_coll_drop] 2016-04-06T02:53:40.903-0500 c20013| 2016-04-06T02:52:26.835-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.905-0500 c20013| 2016-04-06T02:52:26.836-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.907-0500 c20013| 2016-04-06T02:52:26.836-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.909-0500 c20013| 2016-04-06T02:52:26.836-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1187 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.836-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|3, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.910-0500 c20013| 2016-04-06T02:52:26.836-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1187 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.910-0500 c20013| 2016-04-06T02:52:26.837-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1187 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.915-0500 c20013| 2016-04-06T02:52:26.837-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|4, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.916-0500 c20013| 2016-04-06T02:52:26.837-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
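(Each "Updating _lastCommittedOpTime" entry above is the node recomputing the majority commit point from the durable optimes it has heard about via replSetUpdatePosition. A rough illustration of that rule, my own sketch rather than the server's code, using plain numbers for the optimes from one of the reports above:)

    // The commit point is the newest optime that at least a majority of members
    // have made durable: sort ascending (term first), then step back a majority
    // from the end of the list.
    function majorityCommitPoint(durables) {
      durables.sort(function (a, b) { return a.t - b.t || a.secs - b.secs || a.inc - b.inc; });
      var majority = Math.floor(durables.length / 2) + 1; // 2 of 3 in this set
      return durables[durables.length - majority];
    }
    printjson(majorityCommitPoint([
      {secs: 1459929142, inc: 12, t: 2}, // memberId 0
      {secs: 1459929130, inc: 10, t: 1}, // memberId 1
      {secs: 1459929146, inc: 4,  t: 2}  // memberId 2
    ])); // -> { secs: 1459929142, inc: 12, t: 2 }: the highest optime two members share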
[js_test:multi_coll_drop] 2016-04-06T02:53:40.918-0500 c20013| 2016-04-06T02:52:26.837-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1189 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.837-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.918-0500 c20013| 2016-04-06T02:52:26.838-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1189 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.920-0500 c20013| 2016-04-06T02:52:26.839-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|46 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.921-0500 c20013| 2016-04-06T02:52:26.839-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929146000|4, t: 2 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929146000|3, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.922-0500 c20013| 2016-04-06T02:52:26.839-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999983μs
[js_test:multi_coll_drop] 2016-04-06T02:53:40.923-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.923-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.924-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.926-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.928-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.930-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.931-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.934-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.935-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.936-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.938-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.939-0500 c20013| 2016-04-06T02:52:26.839-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.942-0500 c20013| 2016-04-06T02:52:26.840-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.946-0500 c20013| 2016-04-06T02:52:26.840-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.951-0500 c20013| 2016-04-06T02:52:26.840-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1190 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|3, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.956-0500 c20013| 2016-04-06T02:52:26.840-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.957-0500 c20013| 2016-04-06T02:52:26.840-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1190 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.959-0500 c20013| 2016-04-06T02:52:26.840-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|46 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.960-0500 c20013| 2016-04-06T02:52:26.840-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:40.963-0500 c20013| 2016-04-06T02:52:26.840-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|46 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:53:40.966-0500 c20013| 2016-04-06T02:52:26.840-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1190 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.971-0500 c20013| 2016-04-06T02:52:26.841-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.974-0500 c20013| 2016-04-06T02:52:26.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1192 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.976-0500 c20013| 2016-04-06T02:52:26.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1192 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:40.978-0500 c20013| 2016-04-06T02:52:26.841-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1192 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.984-0500 c20013| 2016-04-06T02:52:26.842-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1189 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|5, t: 2, h: -9208531786049148683, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03a65c17830b843f1ad'), state: 2, when: new Date(1459929146841), why: "splitting chunk [{ _id: -77.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:40.987-0500 c20013| 2016-04-06T02:52:26.842-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|5 and ending at ts: Timestamp 1459929146000|5
[js_test:multi_coll_drop] 2016-04-06T02:53:40.988-0500 c20013| 2016-04-06T02:52:26.842-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:40.991-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.991-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.992-0500 s20014| 2016-04-06T02:53:18.974-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:multi_coll_drop] 2016-04-06T02:53:40.994-0500 c20012| 2016-04-06T02:53:08.706-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:40.995-0500 s20014| 2016-04-06T02:53:18.974-0500 D NETWORK [Balancer] connected to server mongovm16:20013 (192.168.100.28)
[js_test:multi_coll_drop] 2016-04-06T02:53:40.998-0500 s20014| 2016-04-06T02:53:18.975-0500 D NETWORK [Balancer] connected connection!
[js_test:multi_coll_drop] 2016-04-06T02:53:41.012-0500 s20014| 2016-04-06T02:53:18.976-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20011, event detected
[js_test:multi_coll_drop] 2016-04-06T02:53:41.015-0500 c20011| 2016-04-06T02:52:45.724-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:41.025-0500 c20012| 2016-04-06T02:53:08.706-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.026-0500 c20012| 2016-04-06T02:53:08.706-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.028-0500 c20012| 2016-04-06T02:53:08.707-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.028-0500 c20012| 2016-04-06T02:53:08.707-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.028-0500 c20012| 2016-04-06T02:53:08.707-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.038-0500 c20011| 2016-04-06T02:52:45.724-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
[js_test:multi_coll_drop] 2016-04-06T02:53:41.043-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.045-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.049-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.051-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.052-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.053-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.054-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.060-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.061-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.064-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.065-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.066-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.066-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.066-0500 c20013| 2016-04-06T02:52:26.842-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:41.067-0500 c20013| 2016-04-06T02:52:26.842-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.068-0500 c20013| 2016-04-06T02:52:26.842-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:41.072-0500 c20013| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.072-0500 c20013| 2016-04-06T02:52:26.843-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.077-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.077-0500 c20013| 2016-04-06T02:52:26.844-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1195 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.844-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|4, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:41.080-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.081-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.086-0500 c20013| 2016-04-06T02:52:26.844-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1195 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:41.088-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.088-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.090-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.091-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.091-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.091-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.093-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.094-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.094-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.095-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.097-0500 c20013| 2016-04-06T02:52:26.844-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.100-0500 c20013| 2016-04-06T02:52:26.844-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:41.100-0500 c20011| 2016-04-06T02:52:45.724-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:41.101-0500 c20012| 2016-04-06T02:53:08.708-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:41.104-0500 s20014| 2016-04-06T02:53:18.976-0500 I NETWORK [Balancer] Socket closed remotely, no longer connected (idle 13 secs, remote host 192.168.100.28:20011)
[js_test:multi_coll_drop] 2016-04-06T02:53:41.104-0500 s20014| 2016-04-06T02:53:18.976-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:41.105-0500 s20014| 2016-04-06T02:53:18.976-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:multi_coll_drop] 2016-04-06T02:53:41.109-0500 s20014| 2016-04-06T02:53:18.977-0500 D NETWORK [Balancer] connected to server mongovm16:20011 (192.168.100.28)
[js_test:multi_coll_drop] 2016-04-06T02:53:41.109-0500 s20014| 2016-04-06T02:53:18.977-0500 D NETWORK [Balancer] connected connection!
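(The score(1.66697) entry above is the plan ranker's formula written out with its inputs; the arithmetic checks out as follows. This is only a restatement of the logged terms, nothing more.)

    // Recompute the ranker's score from the trial-run counters in the log line.
    var baseScore = 1;
    var productivity = 2 / 3;                   // (2 advanced) / (3 works) = 0.666667
    var tieBreakers = 0.0001 + 0.0001 + 0.0001; // noFetchBonus + noSortBonus + noIxisectBonus
    print((baseScore + productivity + tieBreakers).toFixed(5)); // 1.66697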
[js_test:multi_coll_drop] 2016-04-06T02:53:41.111-0500 s20014| 2016-04-06T02:53:18.977-0500 D ASIO [Balancer] startCommand: RemoteCommand 426 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:48.977-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929198273), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.114-0500 s20014| 2016-04-06T02:53:18.977-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 426 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:41.115-0500 s20014| 2016-04-06T02:53:18.990-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 426 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929198000|3, t: 5 }, electionId: ObjectId('7fffffff0000000000000005') } [js_test:multi_coll_drop] 2016-04-06T02:53:41.117-0500 s20014| 2016-04-06T02:53:18.990-0500 D ASIO [Balancer] startCommand: RemoteCommand 428 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:48.990-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.119-0500 s20014| 2016-04-06T02:53:18.990-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 428 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.125-0500 s20014| 2016-04-06T02:53:18.991-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -60.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:41.128-0500 s20014| 2016-04-06T02:53:18.991-0500 D ASIO [conn1] startCommand: RemoteCommand 429 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:48.991-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.129-0500 s20014| 2016-04-06T02:53:18.991-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.130-0500 s20014| 2016-04-06T02:53:18.992-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 430 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.132-0500 s20014| 2016-04-06T02:53:18.992-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.135-0500 s20014| 2016-04-06T02:53:18.992-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 430 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:41.136-0500 s20014| 2016-04-06T02:53:18.992-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 429 on host 
mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.140-0500 s20014| 2016-04-06T02:53:18.995-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 428 finished with response: { waitedMS: 4, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.142-0500 s20014| 2016-04-06T02:53:18.995-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929198000|3, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.147-0500 s20014| 2016-04-06T02:53:18.995-0500 D ASIO [Balancer] startCommand: RemoteCommand 432 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:48.995-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.149-0500 s20014| 2016-04-06T02:53:18.995-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 429 finished with response: { waitedMS: 2, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.149-0500 s20014| 2016-04-06T02:53:18.995-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.150-0500 s20014| 2016-04-06T02:53:18.996-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:41.151-0500 s20014| 2016-04-06T02:53:18.996-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 433 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.153-0500 s20014| 2016-04-06T02:53:18.996-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.157-0500 s20014| 2016-04-06T02:53:18.996-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 433 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:41.158-0500 s20014| 2016-04-06T02:53:18.996-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 432 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.161-0500 s20014| 2016-04-06T02:53:18.999-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -59.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:41.167-0500 s20014| 2016-04-06T02:53:18.999-0500 D ASIO [conn1] startCommand: RemoteCommand 435 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:48.999-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 
1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.170-0500 s20014| 2016-04-06T02:53:18.999-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 435 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.174-0500 s20014| 2016-04-06T02:53:19.015-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 435 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.175-0500 s20014| 2016-04-06T02:53:19.015-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:41.180-0500 s20014| 2016-04-06T02:53:19.035-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -58.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:41.184-0500 s20014| 2016-04-06T02:53:19.035-0500 D ASIO [conn1] startCommand: RemoteCommand 437 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:49.035-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.185-0500 s20014| 2016-04-06T02:53:19.035-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 437 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.189-0500 s20014| 2016-04-06T02:53:19.036-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 437 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.190-0500 s20014| 2016-04-06T02:53:19.036-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:41.193-0500 s20014| 2016-04-06T02:53:19.040-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -57.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll 
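
The run of warnings above is the heart of this stretch of the log: every splitChunk the shard attempts (split keys -60.0, -59.0, -58.0, -57.0) fails with LockBusy because the distributed lock for multidrop.coll in config.locks is apparently still held (a later oplog entry below shows it at state: 2 for an earlier split of [{ _id: -62.0 }, { _id: MaxKey })), and between attempts mongos re-reads the highest-lastmod chunk from config.chunks and reissues the command against the same bounds [{ _id: -61.0 }, { _id: MaxKey }). A minimal shell sketch of that retry pattern, not the test's actual code (mongos, ns, key, maxAttempts, and the back-off are illustrative):

function splitWithRetry(mongos, ns, key, maxAttempts) {
    for (var attempt = 1; attempt <= maxAttempts; attempt++) {
        // One attempt corresponds to one splitChunk round trip in the log above.
        var res = mongos.adminCommand({split: ns, middle: {_id: key}});
        if (res.ok) {
            return res; // the collection lock was free and the split committed
        }
        // A LockBusy failure surfaces here as ok: 0 with an errmsg naming the lock.
        print("split attempt " + attempt + " failed: " + res.errmsg);
        sleep(100); // back off before re-reading config.chunks and retrying
    }
    throw new Error("could not split " + ns + " at _id: " + key);
}
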
[js_test:multi_coll_drop] 2016-04-06T02:53:41.196-0500 s20014| 2016-04-06T02:53:19.040-0500 D ASIO [conn1] startCommand: RemoteCommand 439 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:49.040-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.197-0500 s20014| 2016-04-06T02:53:19.040-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.198-0500 s20014| 2016-04-06T02:53:19.041-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 440 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.198-0500 s20014| 2016-04-06T02:53:19.041-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.199-0500 s20014| 2016-04-06T02:53:19.041-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 440 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:41.200-0500 s20014| 2016-04-06T02:53:19.041-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 439 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.201-0500 c20011| 2016-04-06T02:52:45.724-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 311 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.203-0500 c20011| 2016-04-06T02:52:45.725-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.203-0500 c20011| 2016-04-06T02:52:45.725-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 311 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:41.207-0500 c20011| 2016-04-06T02:52:45.725-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 310 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.210-0500 c20011| 2016-04-06T02:52:45.725-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 310 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:41.213-0500 c20011| 2016-04-06T02:52:45.725-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:47.725Z [js_test:multi_coll_drop] 2016-04-06T02:53:41.218-0500 c20011| 2016-04-06T02:52:46.729-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.219-0500 c20011| 2016-04-06T02:52:46.729-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:41.220-0500 c20011| 2016-04-06T02:52:46.730-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:41.224-0500 c20011| 2016-04-06T02:52:47.725-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 313 -- target:mongovm16:20012 db:admin 
expDate:2016-04-06T02:52:57.725-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.225-0500 c20011| 2016-04-06T02:52:47.725-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 313 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.228-0500 c20011| 2016-04-06T02:52:47.726-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 313 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:41.229-0500 s20014| 2016-04-06T02:53:21.974-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 432 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:48.995-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:41.233-0500 c20012| 2016-04-06T02:53:08.708-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.236-0500 c20012| 2016-04-06T02:53:08.708-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.238-0500 c20012| 2016-04-06T02:53:08.708-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.240-0500 c20012| 2016-04-06T02:53:08.708-0500 D REPL [conn42] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29737788μs [js_test:multi_coll_drop] 2016-04-06T02:53:41.243-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.246-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.249-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.252-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.253-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.253-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.257-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.258-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.261-0500 c20012| 2016-04-06T02:53:08.709-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:41.265-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.269-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.272-0500 c20012| 2016-04-06T02:53:08.709-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-63.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:41.277-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.282-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.284-0500 c20012| 2016-04-06T02:53:08.709-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-62.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:41.285-0500 c20012| 2016-04-06T02:53:08.709-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.287-0500 c20012| 2016-04-06T02:53:08.710-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.288-0500 c20012| 2016-04-06T02:53:08.710-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.296-0500 c20012| 2016-04-06T02:53:08.711-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater 
mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.302-0500 c20012| 2016-04-06T02:53:08.711-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1230 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.304-0500 c20012| 2016-04-06T02:53:08.711-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1230 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.305-0500 c20012| 2016-04-06T02:53:08.711-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1230 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.307-0500 c20012| 2016-04-06T02:53:08.711-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.309-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.310-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.312-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.314-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.315-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.316-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.317-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.318-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.319-0500 c20012| 2016-04-06T02:53:08.712-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.320-0500 c20012| 2016-04-06T02:53:08.713-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:41.322-0500 c20012| 2016-04-06T02:53:08.713-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.322-0500 c20012| 2016-04-06T02:53:08.713-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.323-0500 c20012| 2016-04-06T02:53:08.713-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.333-0500 c20012| 2016-04-06T02:53:08.713-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.338-0500 c20012| 2016-04-06T02:53:08.714-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.360-0500 c20012| 2016-04-06T02:53:08.714-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.361-0500 c20012| 2016-04-06T02:53:08.715-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.365-0500 c20012| 2016-04-06T02:53:08.715-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.366-0500 c20012| 2016-04-06T02:53:08.715-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.368-0500 c20012| 2016-04-06T02:53:08.715-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.368-0500 c20012| 2016-04-06T02:53:08.715-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.370-0500 c20011| 2016-04-06T02:52:47.726-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:49.726Z [js_test:multi_coll_drop] 2016-04-06T02:53:41.370-0500 c20011| 2016-04-06T02:52:48.717-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.371-0500 c20011| 2016-04-06T02:52:48.718-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:41.374-0500 c20011| 2016-04-06T02:52:48.733-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.375-0500 c20011| 2016-04-06T02:52:48.733-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:41.378-0500 c20011| 2016-04-06T02:52:48.733-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:41.381-0500 c20011| 2016-04-06T02:52:49.726-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 315 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:59.726-0500 cmd:{ replSetHeartbeat: 
"multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.382-0500 c20011| 2016-04-06T02:52:49.726-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 315 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.384-0500 c20011| 2016-04-06T02:52:49.726-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 315 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:41.387-0500 c20011| 2016-04-06T02:52:49.726-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:51.726Z [js_test:multi_coll_drop] 2016-04-06T02:53:41.388-0500 c20011| 2016-04-06T02:52:50.733-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.389-0500 c20011| 2016-04-06T02:52:50.733-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:41.391-0500 c20011| 2016-04-06T02:52:50.733-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:41.392-0500 c20011| 2016-04-06T02:52:51.720-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.395-0500 c20011| 2016-04-06T02:52:51.721-0500 D COMMAND [conn41] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.396-0500 c20011| 2016-04-06T02:52:51.721-0500 I COMMAND [conn41] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:41.400-0500 c20011| 2016-04-06T02:52:51.721-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:41.401-0500 s20014| 2016-04-06T02:53:21.974-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 432 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:41.407-0500 s20014| 2016-04-06T02:53:21.974-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 439 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:49.040-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:41.408-0500 s20014| 2016-04-06T02:53:21.974-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 439 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:41.409-0500 s20014| 2016-04-06T02:53:21.974-0500 D NETWORK [Balancer] Marking host mongovm16:20013 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:41.413-0500 s20014| 2016-04-06T02:53:21.975-0500 D ASIO [Balancer] startCommand: RemoteCommand 443 -- target:mongovm16:20011 db:config 
expDate:2016-04-06T02:53:51.975-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.414-0500 s20014| 2016-04-06T02:53:21.975-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 443 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:41.414-0500 s20014| 2016-04-06T02:53:21.975-0500 D NETWORK [conn1] Marking host mongovm16:20013 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:41.418-0500 s20014| 2016-04-06T02:53:21.975-0500 D ASIO [conn1] startCommand: RemoteCommand 444 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:51.975-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.421-0500 s20014| 2016-04-06T02:53:21.975-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 444 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.423-0500 s20014| 2016-04-06T02:53:21.975-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 443 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.426-0500 s20014| 2016-04-06T02:53:21.975-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:53:41.433-0500 s20014| 2016-04-06T02:53:21.975-0500 D ASIO [Balancer] startCommand: RemoteCommand 446 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:51.975-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.433-0500 s20014| 2016-04-06T02:53:21.976-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 446 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.438-0500 s20014| 2016-04-06T02:53:21.976-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 444 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.439-0500 s20014| 2016-04-06T02:53:21.976-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:41.439-0500 c20012| 2016-04-06T02:53:08.715-0500 D REPL [conn42] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29730773μs [js_test:multi_coll_drop] 2016-04-06T02:53:41.440-0500 c20012| 2016-04-06T02:53:08.715-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.441-0500 c20012| 2016-04-06T02:53:08.715-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.443-0500 c20012| 2016-04-06T02:53:08.715-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool 
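
Requests 432 and 439 above never complete: mongovm16:20013 goes away mid-call and both fail with HostUnreachable: End of file, so mongos marks that host as failed and replays the same finds against the surviving config members (443 to mongovm16:20011, 444 to mongovm16:20012). The retries are safe because every config read is pinned with readConcern { level: "majority", afterOpTime: ... } at the opTime of the last config write, so a different member cannot answer with older metadata; at worst it waits for its committed snapshot to catch up (the "Waiting for 'committed' snapshot" lines further down). A sketch of issuing the equivalent read by hand, with the connection and the opTime literal (copied from the log) as illustrative assumptions:

var conn = new Mongo("mongovm16:20012");  // any config replica set member
conn.setSlaveOk();                        // permit reads if this member is a secondary
var res = conn.getDB("config").runCommand({
    find: "chunks",
    filter: {ns: "multidrop.coll"},
    sort: {lastmod: -1},
    limit: 1,
    // afterOpTime is the internal form mongos uses here; note Timestamp() takes
    // seconds, while the log prints the same value as milliseconds|increment.
    readConcern: {level: "majority", afterOpTime: {ts: Timestamp(1459929198, 4), t: NumberLong(5)}},
    maxTimeMS: 30000
});
printjson(res.cursor.firstBatch);         // the single highest-lastmod chunk
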
[js_test:multi_coll_drop] 2016-04-06T02:53:41.443-0500 c20012| 2016-04-06T02:53:08.715-0500 D REPL [rsSync] replication batch size is 2 [js_test:multi_coll_drop] 2016-04-06T02:53:41.443-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.444-0500 c20012| 2016-04-06T02:53:08.716-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:41.447-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.448-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.448-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.450-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.454-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.455-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.457-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.459-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.463-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.464-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.466-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.467-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.472-0500 c20012| 2016-04-06T02:53:08.716-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.473-0500 c20012| 2016-04-06T02:53:08.718-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.478-0500 c20012| 2016-04-06T02:53:08.718-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.479-0500 c20012| 2016-04-06T02:53:08.718-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.482-0500 c20012| 2016-04-06T02:53:08.718-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.485-0500 c20012| 
2016-04-06T02:53:08.718-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.490-0500 c20012| 2016-04-06T02:53:08.718-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1232 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.492-0500 c20012| 2016-04-06T02:53:08.718-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1232 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.495-0500 c20012| 2016-04-06T02:53:08.718-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1232 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.497-0500 c20012| 2016-04-06T02:53:08.719-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.499-0500 c20012| 2016-04-06T02:53:08.719-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.500-0500 c20012| 2016-04-06T02:53:08.719-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.501-0500 c20012| 2016-04-06T02:53:08.719-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.504-0500 c20012| 2016-04-06T02:53:08.719-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.506-0500 c20012| 2016-04-06T02:53:08.719-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.508-0500 c20012| 2016-04-06T02:53:08.719-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.510-0500 c20012| 2016-04-06T02:53:08.719-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.514-0500 c20012| 2016-04-06T02:53:08.719-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.524-0500 c20012| 2016-04-06T02:53:08.720-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.529-0500 c20012| 2016-04-06T02:53:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1234 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.529-0500 c20012| 2016-04-06T02:53:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1234 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.532-0500 c20012| 2016-04-06T02:53:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1234 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.540-0500 c20012| 2016-04-06T02:53:08.720-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.550-0500 c20012| 2016-04-06T02:53:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1235 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.551-0500 c20012| 2016-04-06T02:53:08.720-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1235 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:41.554-0500 c20012| 2016-04-06T02:53:08.721-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1235 finished with response: { ok: 
1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.572-0500 c20012| 2016-04-06T02:53:08.724-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:41.578-0500 c20012| 2016-04-06T02:53:08.724-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.582-0500 c20012| 2016-04-06T02:53:08.725-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:41.586-0500 c20012| 2016-04-06T02:53:08.725-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 278ms [js_test:multi_coll_drop] 2016-04-06T02:53:41.590-0500 c20012| 2016-04-06T02:53:08.731-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1221 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|9, t: 4, h: 6545560476923728443, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c06465c17830b843f1cb'), state: 2, when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 21969886375, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.592-0500 c20012| 2016-04-06T02:53:08.732-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|9 and ending at ts: Timestamp 1459929188000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:41.596-0500 c20012| 2016-04-06T02:53:08.732-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.601-0500 c20012| 2016-04-06T02:53:08.732-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.602-0500 c20012| 2016-04-06T02:53:08.732-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.604-0500 c20012| 2016-04-06T02:53:08.732-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.605-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.608-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.612-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.615-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.616-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.622-0500 c20012| 2016-04-06T02:53:08.733-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:41.624-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.628-0500 c20012| 2016-04-06T02:53:08.733-0500 D QUERY [repl writer worker 7] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:41.631-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.637-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.639-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.640-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.642-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.644-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.648-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.649-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.651-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:41.652-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.654-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.655-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.656-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.657-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.658-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.660-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.661-0500 c20012| 2016-04-06T02:53:08.733-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.664-0500 c20012| 2016-04-06T02:53:08.734-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.665-0500 c20013| 2016-04-06T02:52:26.844-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.673-0500 c20013| 2016-04-06T02:52:26.845-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1196 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|4, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.677-0500 c20013| 2016-04-06T02:52:26.845-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1196 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.678-0500 c20013| 2016-04-06T02:52:26.845-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1196 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.681-0500 c20013| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1195 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: 
"local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.684-0500 c20013| 2016-04-06T02:52:26.846-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|5, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.685-0500 c20013| 2016-04-06T02:52:26.846-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:41.688-0500 c20013| 2016-04-06T02:52:26.846-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1199 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.846-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:41.688-0500 c20013| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1199 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.695-0500 c20013| 2016-04-06T02:52:26.846-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.699-0500 c20013| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1200 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.700-0500 c20013| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1200 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.703-0500 c20013| 2016-04-06T02:52:26.846-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1200 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.709-0500 c20013| 2016-04-06T02:52:26.850-0500 D COMMAND [conn15] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|48 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.711-0500 c20013| 2016-04-06T02:52:26.850-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:41.713-0500 c20013| 2016-04-06T02:52:26.850-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|48 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|5, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.718-0500 c20013| 2016-04-06T02:52:26.850-0500 D QUERY [conn15] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:41.723-0500 c20013| 2016-04-06T02:52:26.850-0500 I COMMAND [conn15] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|48 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|5, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:41.729-0500 c20013| 2016-04-06T02:52:26.853-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1199 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|6, t: 2, h: -5811817306687838428, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: -76.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-77.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.731-0500 c20013| 2016-04-06T02:52:26.853-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|6 and ending at ts: Timestamp 1459929146000|6 [js_test:multi_coll_drop] 2016-04-06T02:53:41.746-0500 c20013| 2016-04-06T02:52:26.854-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.746-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.747-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.750-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.756-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.757-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.759-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.759-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.761-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.763-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.766-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.768-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.768-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.774-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.774-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.776-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.790-0500 c20013| 2016-04-06T02:52:26.854-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:41.793-0500 c20013| 2016-04-06T02:52:26.854-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.794-0500 c20013| 2016-04-06T02:52:26.854-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-77.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:41.794-0500 c20013| 2016-04-06T02:52:26.854-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll-_id_-76.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:41.797-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:41.799-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.802-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.804-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.809-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.810-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.811-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.812-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.813-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.815-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.816-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.818-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.820-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.820-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.821-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.822-0500 c20013| 2016-04-06T02:52:26.855-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.824-0500 c20013| 2016-04-06T02:52:26.855-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.829-0500 c20013| 2016-04-06T02:52:26.855-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.834-0500 c20013| 2016-04-06T02:52:26.855-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1203 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|5, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.836-0500 c20013| 2016-04-06T02:52:26.855-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1203 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.837-0500 c20013| 2016-04-06T02:52:26.855-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1203 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.839-0500 c20013| 2016-04-06T02:52:26.855-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1205 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.855-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|5, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:41.842-0500 c20013| 2016-04-06T02:52:26.856-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1205 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.849-0500 c20013| 2016-04-06T02:52:26.862-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.853-0500 c20013| 2016-04-06T02:52:26.862-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1206 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 
1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.855-0500 c20013| 2016-04-06T02:52:26.862-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1206 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.855-0500 c20013| 2016-04-06T02:52:26.862-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1206 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.861-0500 c20013| 2016-04-06T02:52:26.864-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1205 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|7, t: 2, h: -8448965826059055622, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:26.862-0500-5704c03a65c17830b843f1ae", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929146862), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -77.0 }, max: { _id: MaxKey } }, left: { min: { _id: -77.0 }, max: { _id: -76.0 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -76.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.865-0500 c20013| 2016-04-06T02:52:26.864-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|6, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.870-0500 c20013| 2016-04-06T02:52:26.864-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|7 and ending at ts: Timestamp 1459929146000|7 [js_test:multi_coll_drop] 2016-04-06T02:53:41.871-0500 c20013| 2016-04-06T02:52:26.864-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.873-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.873-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.874-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.876-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.878-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.879-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.883-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.883-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.884-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.886-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.887-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.888-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.890-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.890-0500 c20013| 2016-04-06T02:52:26.864-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:41.891-0500 c20013| 2016-04-06T02:52:26.864-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.891-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.899-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.900-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.902-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.904-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool 
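[editor note] The c20013 traffic above is the steady-state secondary loop: rsBackgroundSync tails the sync source's oplog with getMore (maxTimeMS: 2500, plus term and lastKnownCommittedOpTime for safe tailing), and SyncSourceFeedback acknowledges applied/durable positions upstream with replSetUpdatePosition; the "split" changelog document arrives as an ordinary op: "i" on config.changelog. A rough shell approximation of that tail, under the assumption of a direct connection to the sync source (the real fetcher is internal server code, not shell):

    // Sketch: tail local.oplog.rs from the newest entry forward using the
    // legacy shell's tailable/awaitData cursor options.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var lastTs = oplog.find().sort({ $natural: -1 }).limit(1).next().ts;
    var cur = oplog.find({ ts: { $gt: lastTs } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) printjson(cur.next());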
[js_test:multi_coll_drop] 2016-04-06T02:53:41.905-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.907-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.909-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.910-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.910-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.913-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.913-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.914-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.915-0500 c20013| 2016-04-06T02:52:26.865-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.918-0500 c20013| 2016-04-06T02:52:26.866-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1209 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.866-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|6, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:41.918-0500 c20013| 2016-04-06T02:52:26.869-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.919-0500 c20013| 2016-04-06T02:52:26.869-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.920-0500 c20013| 2016-04-06T02:52:26.869-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1209 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.922-0500 c20013| 2016-04-06T02:52:26.870-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.922-0500 c20013| 2016-04-06T02:52:26.870-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:41.923-0500 c20013| 2016-04-06T02:52:26.870-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:41.926-0500 c20013| 2016-04-06T02:52:26.870-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.933-0500 c20013| 2016-04-06T02:52:26.870-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1210 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|6, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.935-0500 c20013| 2016-04-06T02:52:26.870-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1210 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.938-0500 c20013| 2016-04-06T02:52:26.870-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1210 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.945-0500 c20013| 2016-04-06T02:52:26.876-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.953-0500 c20013| 2016-04-06T02:52:26.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1212 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:41.956-0500 c20013| 2016-04-06T02:52:26.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1212 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:41.959-0500 c20013| 2016-04-06T02:52:26.876-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1212 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.960-0500 c20013| 2016-04-06T02:52:26.876-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1209 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.963-0500 c20013| 2016-04-06T02:52:26.876-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|7, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.967-0500 s20014| 2016-04-06T02:53:21.977-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 446 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.970-0500 s20014| 2016-04-06T02:53:21.977-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:53:41.976-0500 s20014| 2016-04-06T02:53:21.977-0500 D ASIO [Balancer] startCommand: RemoteCommand 449 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:51.977-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929201977), up: 74, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.979-0500 s20014| 2016-04-06T02:53:21.977-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 449 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:41.985-0500 s20014| 2016-04-06T02:53:21.983-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -56.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:41.988-0500 s20014| 2016-04-06T02:53:21.983-0500 D ASIO [conn1] startCommand: RemoteCommand 450 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:51.983-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:41.989-0500 s20014| 2016-04-06T02:53:21.983-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:41.989-0500 s20014| 2016-04-06T02:53:21.986-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 451 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:41.991-0500 s20014| 2016-04-06T02:53:21.986-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:41.995-0500 s20014| 2016-04-06T02:53:21.986-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 451 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:41.999-0500 s20014| 2016-04-06T02:53:21.986-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 450 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.005-0500 s20014| 2016-04-06T02:53:21.987-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 450 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.006-0500 s20014| 2016-04-06T02:53:21.987-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:42.006-0500 s20014| 2016-04-06T02:53:21.992-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 449 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929201000|1, t: 5 }, electionId: ObjectId('7fffffff0000000000000005') } [js_test:multi_coll_drop] 2016-04-06T02:53:42.009-0500 s20014| 2016-04-06T02:53:21.994-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -55.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:42.011-0500 s20014| 2016-04-06T02:53:21.994-0500 D ASIO [conn1] startCommand: RemoteCommand 454 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:51.994-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.012-0500 s20014| 2016-04-06T02:53:21.994-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 454 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.013-0500 s20014| 2016-04-06T02:53:21.995-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 454 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.014-0500 s20014| 2016-04-06T02:53:21.995-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:42.016-0500 s20014| 2016-04-06T02:53:21.997-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -54.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire 
collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:42.024-0500 s20014| 2016-04-06T02:53:21.997-0500 D ASIO [conn1] startCommand: RemoteCommand 456 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:51.997-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.026-0500 s20014| 2016-04-06T02:53:21.997-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 456 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.039-0500 s20014| 2016-04-06T02:53:21.998-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 456 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.041-0500 s20014| 2016-04-06T02:53:21.998-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:42.044-0500 s20014| 2016-04-06T02:53:22.024-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -53.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:42.052-0500 s20014| 2016-04-06T02:53:22.025-0500 D ASIO [conn1] startCommand: RemoteCommand 458 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.025-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.052-0500 s20014| 2016-04-06T02:53:22.025-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 458 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.060-0500 s20014| 2016-04-06T02:53:22.026-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 458 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.063-0500 s20014| 2016-04-06T02:53:22.026-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:42.071-0500 s20014| 2016-04-06T02:53:22.031-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: 
{ _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -52.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:42.075-0500 s20014| 2016-04-06T02:53:22.031-0500 D ASIO [conn1] startCommand: RemoteCommand 460 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.031-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.084-0500 s20014| 2016-04-06T02:53:22.031-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 460 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.088-0500 s20014| 2016-04-06T02:53:22.036-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 460 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.090-0500 s20014| 2016-04-06T02:53:22.036-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:42.090-0500 c20012| 2016-04-06T02:53:08.734-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.092-0500 c20012| 2016-04-06T02:53:08.734-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.092-0500 c20012| 2016-04-06T02:53:08.734-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.093-0500 c20012| 2016-04-06T02:53:08.734-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.095-0500 c20012| 2016-04-06T02:53:08.734-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.102-0500 c20012| 2016-04-06T02:53:08.734-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
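[editor note] The s20014 block above is the interesting failure loop of this test: every splitChunk on [{ _id: -61.0 }, { _id: MaxKey }) fails with LockBusy because the collection's distributed lock cannot be acquired (expected here, since the suite loads the sharding_continuous_config_stepdown override), after which mongos re-reads the newest chunk metadata with a majority read pinned at afterOpTime and retries the split at the next key. A hypothetical retry loop in jstest style, where st is an assumed ShardingTest handle and the split key is taken from the log (the actual test drives splits through mongos, which produces the retries seen above):

    // Sketch: keep attempting the split until the distributed lock frees up.
    assert.soon(function() {
        var res = st.s.getDB("admin").runCommand(
            { split: "multidrop.coll", middle: { _id: -56.0 } });
        if (!res.ok) print("split retried: " + tojson(res));
        return res.ok;
    }, "split of multidrop.coll never succeeded");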
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.109-0500 c20012| 2016-04-06T02:53:08.734-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.112-0500 c20012| 2016-04-06T02:53:08.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1239 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.115-0500 c20012| 2016-04-06T02:53:08.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1239 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.117-0500 c20012| 2016-04-06T02:53:08.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1239 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.120-0500 c20012| 2016-04-06T02:53:08.735-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1241 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.735-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|8, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.126-0500 c20012| 2016-04-06T02:53:08.735-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.128-0500 c20012| 2016-04-06T02:53:08.735-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1241 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.137-0500 c20012| 2016-04-06T02:53:08.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1242 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 
}, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.138-0500 c20012| 2016-04-06T02:53:08.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1242 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.139-0500 c20012| 2016-04-06T02:53:08.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1242 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.143-0500 c20012| 2016-04-06T02:53:08.755-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.150-0500 c20012| 2016-04-06T02:53:08.755-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1244 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.150-0500 c20012| 2016-04-06T02:53:08.755-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1244 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.151-0500 c20012| 2016-04-06T02:53:08.755-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1244 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.153-0500 c20012| 2016-04-06T02:53:08.756-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1241 finished with response: { cursor: { nextBatch: [], id: 21969886375, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.154-0500 c20012| 2016-04-06T02:53:08.759-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|9, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.155-0500 c20012| 2016-04-06T02:53:08.759-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:42.158-0500 c20012| 2016-04-06T02:53:08.759-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1247 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.759-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|9, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.158-0500 c20012| 2016-04-06T02:53:08.760-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1247 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.166-0500 c20012| 
2016-04-06T02:53:08.773-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1247 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|10, t: 4, h: -7436856840318092141, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: -61.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-62.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-61.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 21969886375, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.167-0500 c20012| 2016-04-06T02:53:08.773-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|10 and ending at ts: Timestamp 1459929188000|10 [js_test:multi_coll_drop] 2016-04-06T02:53:42.170-0500 c20012| 2016-04-06T02:53:08.774-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.182-0500 c20012| 2016-04-06T02:53:08.774-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.183-0500 c20012| 2016-04-06T02:53:08.774-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.184-0500 c20012| 2016-04-06T02:53:08.774-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.184-0500 c20012| 2016-04-06T02:53:08.774-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.187-0500 c20012| 2016-04-06T02:53:08.774-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.190-0500 c20012| 2016-04-06T02:53:08.774-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.191-0500 c20012| 2016-04-06T02:53:08.774-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.192-0500 c20012| 2016-04-06T02:53:08.774-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:42.194-0500 c20012| 2016-04-06T02:53:08.774-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.195-0500 c20012| 2016-04-06T02:53:08.774-0500 D QUERY [repl writer worker 5] Using idhack: { _id: "multidrop.coll-_id_-62.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:42.199-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.200-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.202-0500 c20012| 
2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.203-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.204-0500 c20012| 2016-04-06T02:53:08.775-0500 D QUERY [repl writer worker 5] Using idhack: { _id: "multidrop.coll-_id_-61.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:42.205-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.207-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.208-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.209-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.210-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.211-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.213-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.214-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.215-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.215-0500 c20012| 2016-04-06T02:53:08.775-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.216-0500 c20012| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.217-0500 c20012| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.218-0500 c20012| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.222-0500 c20012| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.222-0500 c20012| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.222-0500 c20012| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.223-0500 c20012| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.224-0500 
c20012| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.225-0500 c20012| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.225-0500 c20012| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.226-0500 c20012| 2016-04-06T02:53:08.777-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.227-0500 c20013| 2016-04-06T02:52:26.876-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:42.234-0500 c20013| 2016-04-06T02:52:26.877-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1215 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.877-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.235-0500 c20013| 2016-04-06T02:52:26.877-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1215 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:42.237-0500 c20013| 2016-04-06T02:52:26.877-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1215 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|8, t: 2, h: -1200371352031369196, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.239-0500 c20013| 2016-04-06T02:52:26.877-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|8 and ending at ts: Timestamp 1459929146000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:42.240-0500 c20013| 2016-04-06T02:52:26.877-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.241-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.242-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.243-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.245-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.246-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.247-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.248-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.249-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.249-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.250-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.251-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.253-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.254-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.254-0500 c20013| 2016-04-06T02:52:26.878-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:42.254-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.255-0500 c20013| 2016-04-06T02:52:26.878-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:42.257-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.258-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.259-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.260-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
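[editor note] The op: "u" on config.locks with { $set: { state: 0 } } replicated just above is the other side of the LockBusy failures: the holder releasing the distributed lock on multidrop.coll. Once that update commits, the queued splits can acquire the lock. The lock document can be inspected directly on a config server; field semantics here are assumed from the 3.2-series distributed-lock format (state 0 = unlocked, 2 = locked):

    // Sketch: check who holds (or last held) the collection's distributed lock.
    db.getSiblingDB("config").locks.find({ _id: "multidrop.coll" }).pretty()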
2016-04-06T02:53:42.260-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.261-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.264-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.264-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.265-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.267-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.268-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.270-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.271-0500 c20013| 2016-04-06T02:52:26.878-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.272-0500 c20013| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.272-0500 c20013| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.277-0500 c20013| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.277-0500 c20013| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.289-0500 c20013| 2016-04-06T02:52:26.879-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.292-0500 c20013| 2016-04-06T02:52:26.879-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.298-0500 c20012| 2016-04-06T02:53:08.777-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.311-0500 c20012| 2016-04-06T02:53:08.777-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1249 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.316-0500 c20012| 2016-04-06T02:53:08.777-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1249 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.318-0500 c20012| 2016-04-06T02:53:08.778-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1249 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.319-0500 c20012| 2016-04-06T02:53:08.780-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1251 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.780-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|9, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.320-0500 c20012| 2016-04-06T02:53:08.780-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1251 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.324-0500 c20012| 2016-04-06T02:53:08.784-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.332-0500 c20012| 2016-04-06T02:53:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1252 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|10, 
t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.334-0500 c20012| 2016-04-06T02:53:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1252 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.335-0500 c20012| 2016-04-06T02:53:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1252 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.339-0500 c20012| 2016-04-06T02:53:08.785-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1251 finished with response: { cursor: { nextBatch: [], id: 21969886375, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.340-0500 c20012| 2016-04-06T02:53:08.785-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|10, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.344-0500 c20012| 2016-04-06T02:53:08.785-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:42.349-0500 c20012| 2016-04-06T02:53:08.785-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1255 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.785-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|10, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.351-0500 c20012| 2016-04-06T02:53:08.785-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1255 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.357-0500 c20012| 2016-04-06T02:53:08.787-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1255 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|11, t: 4, h: -766951703923615705, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:53:08.786-0500-5704c06465c17830b843f1cc", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188786), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -62.0 }, max: { _id: MaxKey } }, left: { min: { _id: -62.0 }, max: { _id: -61.0 }, lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -61.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 21969886375, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.359-0500 c20012| 2016-04-06T02:53:08.788-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|11 and ending at ts: Timestamp 1459929188000|11 [js_test:multi_coll_drop] 2016-04-06T02:53:42.361-0500 c20012| 2016-04-06T02:53:08.788-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.361-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.362-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.363-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.368-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.369-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.371-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.373-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.374-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.377-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.378-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.378-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.379-0500 c20012| 2016-04-06T02:53:08.788-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:42.382-0500 c20012| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.384-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.385-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.386-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.387-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.387-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.390-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.390-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool 
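
The cycle visible above is the secondary's steady-state apply loop: the background-sync fetcher tails the sync source's oplog with short awaitData getMore calls (maxTimeMS: 2500), each non-empty batch ("replication batch size is 1") is handed to the repl writer worker pool, and the pool's threads start and shut down around the single-document batch. A minimal shell sketch of the same tailing read, assuming a reachable member at mongovm16:20013 (host, collection, and starting timestamp are taken from the log; optimes are printed there as milliseconds|increment, so Timestamp 1459929188000|11 is seconds 1459929188, increment 11):

    // Tail the sync source's oplog roughly the way the fetcher does (sketch).
    var sync = new Mongo("mongovm16:20013");
    var oplog = sync.getDB("local").getCollection("oplog.rs");
    var cursor = oplog.find({ ts: { $gte: Timestamp(1459929188, 11) } })
                      .addOption(DBQuery.Option.tailable)
                      .addOption(DBQuery.Option.awaitData)
                      .addOption(DBQuery.Option.oplogReplay);
    while (cursor.hasNext()) {
        // Each document is one replicated op, e.g. the op: "i" insert into
        // config.changelog recording the "split" of multidrop.coll seen above.
        printjson(cursor.next());
    }

awaitData is what lets the server hold an exhausted tailable cursor open for a while instead of answering immediately, which is why most rounds above come back with nextBatch: [] only after a pause.
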
[js_test:multi_coll_drop] 2016-04-06T02:53:42.391-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.391-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.397-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.398-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.400-0500 c20012| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.403-0500 c20012| 2016-04-06T02:53:08.790-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1257 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.790-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|10, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.404-0500 c20012| 2016-04-06T02:53:08.790-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1257 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.405-0500 c20012| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.406-0500 c20012| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.407-0500 c20012| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.408-0500 c20012| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.412-0500 c20012| 2016-04-06T02:53:08.792-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.414-0500 c20012| 2016-04-06T02:53:08.792-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.415-0500 c20012| 2016-04-06T02:53:08.792-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.415-0500 c20012| 2016-04-06T02:53:08.792-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.418-0500 c20012| 2016-04-06T02:53:08.793-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.422-0500 c20012| 2016-04-06T02:53:08.793-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.427-0500 c20012| 2016-04-06T02:53:08.793-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1258 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.428-0500 c20012| 2016-04-06T02:53:08.793-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1258 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.428-0500 c20012| 2016-04-06T02:53:08.794-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1258 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.430-0500 c20012| 2016-04-06T02:53:08.798-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1257 finished with response: { cursor: { nextBatch: [], id: 21969886375, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.433-0500 c20012| 2016-04-06T02:53:08.798-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|11, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.434-0500 c20012| 2016-04-06T02:53:08.798-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:42.437-0500 c20012| 2016-04-06T02:53:08.798-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1261 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.798-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.438-0500 c20012| 2016-04-06T02:53:08.798-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1261 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.444-0500 c20012| 2016-04-06T02:53:08.799-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.450-0500 c20012| 2016-04-06T02:53:08.799-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1262 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.451-0500 c20012| 2016-04-06T02:53:08.799-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1262 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.455-0500 c20012| 2016-04-06T02:53:08.799-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1262 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.457-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:_mdb_catalog -> { numRecords: 17, dataSize: 6553 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.458-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-11-6577373056560964212 -> { numRecords: 3, dataSize: 198 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.458-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-13-6577373056560964212 -> { numRecords: 1, dataSize: 83 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.459-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-15-6577373056560964212 -> { numRecords: 2, dataSize: 72 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.461-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-17-6577373056560964212 -> { numRecords: 41, dataSize: 7038 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.462-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-22-6577373056560964212 -> { numRecords: 1, dataSize: 50 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.466-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-25-6577373056560964212 -> { numRecords: 3, dataSize: 644 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.467-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-30-6577373056560964212 -> { numRecords: 0, dataSize: 0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.468-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-33-6577373056560964212 -> { numRecords: 2, dataSize: 204 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.469-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-35-6577373056560964212 -> { numRecords: 43, dataSize: 18739 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.473-0500 c20012| 2016-04-06T02:53:09.558-0500 D 
STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-37-6577373056560964212 -> { numRecords: 1, dataSize: 61 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.474-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-39-6577373056560964212 -> { numRecords: 1, dataSize: 114 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.475-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-6-6577373056560964212 -> { numRecords: 220, dataSize: 74132 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.477-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-7-6577373056560964212 -> { numRecords: 1, dataSize: 45 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.477-0500 c20012| 2016-04-06T02:53:09.558-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-9-6577373056560964212 -> { numRecords: 1, dataSize: 60 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.481-0500 c20012| 2016-04-06T02:53:10.250-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.481-0500 c20012| 2016-04-06T02:53:10.250-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:42.483-0500 c20012| 2016-04-06T02:53:10.250-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.487-0500 c20012| 2016-04-06T02:53:10.685-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1264 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:20.685-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.489-0500 c20012| 2016-04-06T02:53:10.685-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1264 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.492-0500 c20012| 2016-04-06T02:53:10.696-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1265 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:20.696-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.493-0500 c20012| 2016-04-06T02:53:10.697-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1265 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.499-0500 c20012| 2016-04-06T02:53:10.697-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1265 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, opTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.500-0500 c20012| 2016-04-06T02:53:10.698-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:12.698Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.506-0500 c20012| 2016-04-06T02:53:11.299-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending 
slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.509-0500 c20012| 2016-04-06T02:53:11.299-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1267 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.510-0500 c20012| 2016-04-06T02:53:11.299-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1267 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.511-0500 c20012| 2016-04-06T02:53:12.250-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.516-0500 c20012| 2016-04-06T02:53:12.250-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:42.517-0500 c20012| 2016-04-06T02:53:12.251-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.519-0500 c20012| 2016-04-06T02:53:12.698-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1268 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:22.698-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.525-0500 c20012| 2016-04-06T02:53:12.698-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1268 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.528-0500 c20012| 2016-04-06T02:53:12.698-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1268 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, opTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.532-0500 c20012| 2016-04-06T02:53:12.699-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:14.699Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.540-0500 c20012| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1261 timed out, adjusted timeout after getting connection from pool was 5000ms, op was id: 12, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 2016-04-06T02:53:08.798-0500, request: RemoteCommand 1261 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.798-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } }
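
Request 1261 is the tailing getMore issued at 02:53:08.798 with maxTimeMS: 2500; the "adjusted timeout after getting connection from pool was 5000ms" is the ASIO layer granting slack beyond maxTimeMS before it declares the operation dead. The entries that follow show the deadline expiring: the request fails with ExceededTimeLimit, the fetcher stops, and the node briefly reports "could not find member to sync from" before re-running sync source selection. For reference, a hand-issued equivalent of that command from the shell (a sketch only: the cursor id is valid only inside the fetcher's own session, and term and lastKnownCommittedOpTime are internal replication fields copied verbatim from the log):

    // Re-issue the fetcher's getMore by hand (illustrative; expect a
    // cursor-not-found error unless run against a cursor you actually own).
    var sync = new Mongo("mongovm16:20013");
    var res = sync.getDB("local").runCommand({
        getMore: NumberLong("21969886375"),
        collection: "oplog.rs",
        maxTimeMS: 2500,
        term: NumberLong(4),
        lastKnownCommittedOpTime: { ts: Timestamp(1459929188, 11), t: NumberLong(4) }
    });
    printjson(res); // on success, res.cursor.nextBatch holds any new oplog entries
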
[js_test:multi_coll_drop] 2016-04-06T02:53:42.544-0500 c20012| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Operation timing out; original request was: RemoteCommand 1261 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.798-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.548-0500 c20012| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to execute command: RemoteCommand 1261 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.798-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1261 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.798-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.554-0500 c20012| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1261 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1261 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.798-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.556-0500 c20012| 2016-04-06T02:53:13.800-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.556-0500 c20012| 2016-04-06T02:53:13.800-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:42.559-0500 c20012| 2016-04-06T02:53:13.800-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.561-0500 c20012| 2016-04-06T02:53:13.801-0500 D REPL [rsBackgroundSync-0] Error returned from oplog query: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1261 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.798-0500 cmd:{ getMore: 21969886375, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.562-0500 c20012| 2016-04-06T02:53:13.802-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.563-0500 c20012| 2016-04-06T02:53:13.802-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:42.569-0500 c20012| 2016-04-06T02:53:13.802-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 1264 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:20.685-0500 cmd:{ replSetHeartbeat:
"multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.571-0500 c20012| 2016-04-06T02:53:13.802-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:13.802Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.572-0500 c20012| 2016-04-06T02:53:13.802-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:13.802Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.575-0500 c20012| 2016-04-06T02:53:13.802-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1264 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:20.685-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:42.577-0500 c20012| 2016-04-06T02:53:13.802-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1271 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:23.802-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.578-0500 c20012| 2016-04-06T02:53:13.802-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1264 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:42.578-0500 c20012| 2016-04-06T02:53:13.802-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1272 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:20.685-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.579-0500 c20012| 2016-04-06T02:53:13.802-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1271 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.579-0500 c20012| 2016-04-06T02:53:13.802-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1272 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.582-0500 c20012| 2016-04-06T02:53:13.804-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1271 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, opTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.583-0500 c20012| 2016-04-06T02:53:13.804-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:16.304Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.585-0500 c20012| 2016-04-06T02:53:14.159-0500 D COMMAND [conn31] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 4, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.585-0500 c20012| 2016-04-06T02:53:14.159-0500 D COMMAND [conn31] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:42.589-0500 c20012| 2016-04-06T02:53:14.159-0500 D QUERY [conn31] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.596-0500 c20012| 2016-04-06T02:53:14.160-0500 I COMMAND [conn31] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 4, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929188000|11, t: 4 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.603-0500 c20012| 2016-04-06T02:53:14.160-0500 D COMMAND [conn31] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 5, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.603-0500 c20012| 2016-04-06T02:53:14.160-0500 D COMMAND [conn31] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:42.606-0500 c20012| 2016-04-06T02:53:14.162-0500 D QUERY [conn31] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.608-0500 c20012| 2016-04-06T02:53:14.162-0500 I COMMAND [conn31] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 5, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929188000|11, t: 4 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.612-0500 c20012| 2016-04-06T02:53:14.163-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.613-0500 c20012| 2016-04-06T02:53:14.163-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:42.617-0500 c20012| 2016-04-06T02:53:14.163-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } numYields:0 reslen:478 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.618-0500 c20012| 2016-04-06T02:53:16.166-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.619-0500 c20012| 2016-04-06T02:53:16.166-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:42.621-0500 c20012| 2016-04-06T02:53:16.191-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } numYields:0 reslen:478 locks:{} protocol:op_command 24ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.623-0500 c20012| 2016-04-06T02:53:16.304-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1275 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:26.304-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.626-0500 c20012| 2016-04-06T02:53:16.305-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1275 on host mongovm16:20011
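
The two replSetRequestVotes commands above are one protocol-version-1 election as seen from this voter: mongovm16:20011 first runs a dryRun "pre-vote" at the current term 4, and only after that succeeds does it bump to term 5 and request the real vote (each request is recorded against local.replset.election, hence the COLLSCAN on that collection). The heartbeats from 20011 switch to term: 5 in the entries above immediately afterwards. A quick way to observe the outcome from the shell, assuming the node is reachable (a sketch):

    // Inspect this member's view of the election result.
    var status = new Mongo("mongovm16:20012").getDB("admin")
                     .runCommand({ replSetGetStatus: 1 });
    print("term: " + status.term + ", myState: " + status.myState);
    // After the election above: term 5, myState 2 (SECONDARY), with
    // mongovm16:20011 reported as PRIMARY in status.members.
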
[js_test:multi_coll_drop] 2016-04-06T02:53:42.644-0500 c20012| 2016-04-06T02:53:16.310-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1275 finished with response: { ok: 1.0, electionTime: new Date(6270348142705639425), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, opTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.647-0500 c20012| 2016-04-06T02:53:16.310-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:42.647-0500 c20012| 2016-04-06T02:53:16.310-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:18.810Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.651-0500 c20012| 2016-04-06T02:53:16.822-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.653-0500 c20012| 2016-04-06T02:53:16.822-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1277 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:46.822-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.654-0500 c20012| 2016-04-06T02:53:16.822-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1277 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.658-0500 c20012| 2016-04-06T02:53:16.823-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1277 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.660-0500 c20012| 2016-04-06T02:53:16.823-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20011 starting at filter: { ts: { $gte: Timestamp 1459929188000|11 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.662-0500 c20012| 2016-04-06T02:53:16.823-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1279 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:21.823-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929188000|11 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.668-0500 c20012| 2016-04-06T02:53:16.823-0500 I ASIO [rsBackgroundSync] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.670-0500 c20012| 2016-04-06T02:53:16.823-0500 I ASIO [rsBackgroundSync] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:42.671-0500 c20012| 2016-04-06T02:53:16.823-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.671-0500 c20012| 2016-04-06T02:53:16.823-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1280 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.672-0500 c20012| 2016-04-06T02:53:16.828-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.673-0500 c20012| 2016-04-06T02:53:16.828-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1280 finished with response: {} [js_test:multi_coll_drop]
2016-04-06T02:53:42.674-0500 c20012| 2016-04-06T02:53:16.828-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1279 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.682-0500 c20012| 2016-04-06T02:53:16.829-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1279 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929188000|11, t: 4, h: -766951703923615705, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:53:08.786-0500-5704c06465c17830b843f1cc", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188786), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -62.0 }, max: { _id: MaxKey } }, left: { min: { _id: -62.0 }, max: { _id: -61.0 }, lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -61.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929194000|2, t: 5, h: -5008400190369061014, v: 2, op: "n", ns: "", o: { msg: "new primary" } } ], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.686-0500 c20012| 2016-04-06T02:53:16.830-0500 D REPL [rsBackgroundSync-0] fetcher read 2 operations from remote oplog starting at ts: Timestamp 1459929188000|11 and ending at ts: Timestamp 1459929194000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:42.692-0500 c20012| 2016-04-06T02:53:16.832-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1282 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:21.832-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.692-0500 c20012| 2016-04-06T02:53:16.832-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1282 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.694-0500 c20012| 2016-04-06T02:53:16.834-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.699-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.700-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.701-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.703-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.705-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.706-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.742-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.745-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.746-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.748-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.751-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.755-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.756-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.758-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.761-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.762-0500 c20012| 2016-04-06T02:53:16.835-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:42.763-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.764-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.764-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.767-0500 c20012| 2016-04-06T02:53:16.835-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:42.768-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.769-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.771-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.772-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.773-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.776-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.777-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.778-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.779-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.780-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.785-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.786-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.788-0500 c20012| 2016-04-06T02:53:16.836-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.789-0500 c20012| 2016-04-06T02:53:16.836-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.792-0500 c20012| 2016-04-06T02:53:18.196-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.792-0500 c20012| 2016-04-06T02:53:18.197-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:42.795-0500 c20012| 2016-04-06T02:53:18.210-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } numYields:0 reslen:509 locks:{} protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.797-0500 c20012| 2016-04-06T02:53:18.210-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1282 finished with response: { cursor: { nextBatch: [], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.798-0500 c20012| 2016-04-06T02:53:18.212-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929194000|2, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.799-0500 c20012| 2016-04-06T02:53:18.212-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:42.801-0500 c20012| 2016-04-06T02:53:18.213-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1284 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:23.213-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.803-0500 c20012| 2016-04-06T02:53:18.213-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1284 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.804-0500 c20012| 2016-04-06T02:53:18.810-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1285 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:28.810-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.805-0500 c20012| 2016-04-06T02:53:18.810-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1285 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.809-0500 c20012| 2016-04-06T02:53:18.810-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1285 finished with response: { ok: 1.0, electionTime: new Date(6270348142705639425), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, opTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.810-0500 c20012| 2016-04-06T02:53:18.810-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:20.810Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.814-0500 c20012| 2016-04-06T02:53:18.967-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.814-0500 c20012| 2016-04-06T02:53:18.967-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:42.817-0500 c20012| 2016-04-06T02:53:18.967-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1267 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.824-0500 c20012| 2016-04-06T02:53:18.967-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:42.825-0500 c20012| 2016-04-06T02:53:18.967-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20013: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:42.828-0500 c20012| 2016-04-06T02:53:18.967-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:42.829-0500 c20012| 2016-04-06T02:53:18.967-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.834-0500 c20012| 2016-04-06T02:53:18.967-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.836-0500 c20012| 2016-04-06T02:53:18.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1288 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:42.838-0500 c20012| 2016-04-06T02:53:18.967-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.843-0500 c20012| 2016-04-06T02:53:18.967-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:42.845-0500 c20012| 2016-04-06T02:53:18.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.847-0500 c20012| 2016-04-06T02:53:18.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1289 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.849-0500 c20012| 2016-04-06T02:53:18.968-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:509 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.851-0500 c20012| 2016-04-06T02:53:18.968-0500 D ASIO 
[NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1272 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:20.685-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:42.853-0500 c20012| 2016-04-06T02:53:18.968-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1272 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:42.853-0500 c20012| 2016-04-06T02:53:18.969-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.854-0500 c20012| 2016-04-06T02:53:18.969-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.856-0500 c20012| 2016-04-06T02:53:18.969-0500 I REPL [ReplicationExecutor] Error in heartbeat request to mongovm16:20013; HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:42.857-0500 c20012| 2016-04-06T02:53:18.969-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:18.969Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.859-0500 c20012| 2016-04-06T02:53:18.969-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1291 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:20.685-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.859-0500 c20012| 2016-04-06T02:53:18.969-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.859-0500 c20012| 2016-04-06T02:53:18.969-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.861-0500 c20012| 2016-04-06T02:53:18.969-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.862-0500 c20012| 2016-04-06T02:53:18.970-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1292 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.864-0500 c20012| 2016-04-06T02:53:18.970-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|12, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.867-0500 c20012| 2016-04-06T02:53:18.970-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|12, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.869-0500 c20012| 2016-04-06T02:53:18.970-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|12, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.870-0500 c20012| 2016-04-06T02:53:18.970-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:42.873-0500 c20012| 2016-04-06T02:53:18.970-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|12, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.877-0500 c20012| 2016-04-06T02:53:18.970-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1284 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929198000|1, t: 5, h: -3268348888765280945, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929198271), up: 71, waiting: false } } } ], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.878-0500 c20012| 2016-04-06T02:53:18.971-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929198000|1 and ending at ts: Timestamp 1459929198000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:42.879-0500 c20012| 2016-04-06T02:53:18.972-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.881-0500 c20012| 2016-04-06T02:53:18.972-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.884-0500 c20012| 2016-04-06T02:53:18.972-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 }
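
Both chunk lookups from conn42 use readConcern { level: "majority" } with an afterOpTime: the server first blocks until its committed snapshot reaches the requested optime ("Waiting for 'committed' snapshot...", then "Using 'committed' snapshot."), and only then runs the query, which the planner satisfies with an IXSCAN on { ns: 1, lastmod: 1 }. This is how the sharding code reads the highest-lastmod chunk of multidrop.coll without observing uncommitted config writes. An equivalent invocation from the shell (a sketch; afterOpTime is the internal form used here, and ordinary clients would send just level: "majority"):

    // Read the newest chunk entry for multidrop.coll at majority.
    var cfg = new Mongo("mongovm16:20012").getDB("config");
    var res = cfg.runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        maxTimeMS: 30000,
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929194, 2), t: NumberLong(5) } }
    });
    printjson(res.cursor.firstBatch[0]); // the highest-lastmod chunk document
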
[js_test:multi_coll_drop] 2016-04-06T02:53:42.887-0500 c20012| 2016-04-06T02:53:18.972-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:42.887-0500 c20012| 2016-04-06T02:53:18.973-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.889-0500 c20012| 2016-04-06T02:53:18.973-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1292 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:42.892-0500 c20012| 2016-04-06T02:53:18.973-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1291 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:42.894-0500 c20012| 2016-04-06T02:53:18.973-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:42.898-0500 c20012| 2016-04-06T02:53:18.973-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:42.901-0500 c20012| 2016-04-06T02:53:18.973-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.901-0500 c20012| 2016-04-06T02:53:18.973-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.902-0500 c20012| 2016-04-06T02:53:18.973-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.904-0500 c20012| 2016-04-06T02:53:18.973-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.904-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.910-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.912-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.914-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.916-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.917-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 12]
starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.917-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.918-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.918-0500 c20012| 2016-04-06T02:53:18.974-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:42.919-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.921-0500 c20012| 2016-04-06T02:53:18.974-0500 D QUERY [repl writer worker 6] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:42.921-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.922-0500 c20012| 2016-04-06T02:53:18.974-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.926-0500 c20012| 2016-04-06T02:53:18.974-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1294 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:23.974-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.927-0500 c20012| 2016-04-06T02:53:18.974-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1294 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:42.933-0500 c20012| 2016-04-06T02:53:18.975-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1294 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929198000|2, t: 5, h: -4206783531453163132, v: 2, op: "u", ns: "config.lockpings", o2: { _id: "mongovm16:20010:1459929128:185613966" }, o: { $set: { ping: new Date(1459929191721) } } } ], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:42.957-0500 c20012| 2016-04-06T02:53:18.975-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1291 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 0, durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, opTime: { ts: Timestamp 1459929198000|2, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:42.964-0500 c20012| 2016-04-06T02:53:18.975-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:42.981-0500 c20012| 2016-04-06T02:53:18.975-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:20.975Z [js_test:multi_coll_drop] 2016-04-06T02:53:42.986-0500 c20012| 2016-04-06T02:53:18.976-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929198000|2 and ending at ts: Timestamp 1459929198000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:42.987-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:42.990-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer 
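The score(2.0003) lines above are the query planner's plan-ranking formula: each candidate plan gets score = baseScore + productivity + tieBreakers, where productivity is documents advanced per unit of work and each tie-breaker (no fetch, no blocking sort, no index intersection) is worth 0.0001. A minimal sketch of that arithmetic, mirroring the names the log prints (an illustration, not the server's implementation):

    // Plan-ranking arithmetic as printed by the D QUERY score(...) lines.
    // baseScore is fixed at 1; productivity = advanced / works; each earned
    // tie-breaker bonus adds 0.0001 to break ties between otherwise equal plans.
    function planScore(advanced, works, noFetch, noSort, noIxisect) {
        var baseScore = 1;
        var productivity = advanced / works;
        var tieBreakers = (noFetch ? 0.0001 : 0) +
                          (noSort ? 0.0001 : 0) +
                          (noIxisect ? 0.0001 : 0);
        return baseScore + productivity + tieBreakers;
    }
    planScore(1, 1, true, true, true); // 2.0003, matching score(2.0003) above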
[js_test:multi_coll_drop] 2016-04-06T02:53:42.991-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:42.993-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:42.995-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:42.995-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:42.998-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.011-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.012-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.012-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.012-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.013-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.013-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.014-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.014-0500 c20012| 2016-04-06T02:53:18.976-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.014-0500 c20012| 2016-04-06T02:53:18.977-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.015-0500 c20012| 2016-04-06T02:53:18.977-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1289 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:53:43.015-0500 c20012| 2016-04-06T02:53:18.977-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1288 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.016-0500 c20012| 2016-04-06T02:53:18.977-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1288 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.016-0500 c20012| 2016-04-06T02:53:18.977-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.016-0500 c20012| 2016-04-06T02:53:18.977-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.019-0500 c20012| 2016-04-06T02:53:18.978-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:43.020-0500 c20012| 2016-04-06T02:53:18.978-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:43.020-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.021-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.021-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.021-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.021-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.022-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.022-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.023-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.023-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.023-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.024-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.024-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.025-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.028-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.029-0500 c20012| 2016-04-06T02:53:18.978-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:43.031-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.032-0500 c20012| 2016-04-06T02:53:18.978-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.034-0500 c20012| 2016-04-06T02:53:18.978-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20010:1459929128:185613966" }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.036-0500 c20012| 2016-04-06T02:53:18.978-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.041-0500 c20012| 2016-04-06T02:53:18.978-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1298 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.041-0500 c20012| 2016-04-06T02:53:18.978-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1298 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.043-0500 c20012| 2016-04-06T02:53:18.978-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1299 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:23.978-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.044-0500 c20012| 2016-04-06T02:53:18.978-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1299 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.057-0500 c20012| 2016-04-06T02:53:18.979-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1299 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929198000|3, t: 5, h: 6440260587993245876, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929198273), up: 71, waiting: false } } } ], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.058-0500 c20012| 2016-04-06T02:53:18.979-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.061-0500 c20012| 2016-04-06T02:53:18.979-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.061-0500 c20012| 2016-04-06T02:53:18.979-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.062-0500 c20012| 2016-04-06T02:53:18.979-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1298 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.062-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.064-0500 c20012| 2016-04-06T02:53:18.980-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929198000|3 and ending at ts: Timestamp 1459929198000|3
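The Reporter entries above show this secondary pushing replication progress to its sync source with the internal replSetUpdatePosition command: one { durableOpTime, appliedOpTime, memberId, cfgver } entry per member it knows about. The same per-member optimes can be read from any member with replSetGetStatus; a sketch, assuming a shell connected to a member of multidrop-configRS:

    // Summarize each member's replication progress -- roughly the data the
    // internal replSetUpdatePosition traffic above carries upstream.
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function(m) {
        print(m.name + " " + m.stateStr + " optime=" + tojson(m.optime));
    });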
[js_test:multi_coll_drop] 2016-04-06T02:53:43.067-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.069-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.070-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.070-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.072-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.074-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.075-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.075-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.076-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.077-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.078-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.079-0500 c20012| 2016-04-06T02:53:18.980-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.080-0500 c20012| 2016-04-06T02:53:18.981-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:43.082-0500 c20012| 2016-04-06T02:53:18.981-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:43.083-0500 c20012| 2016-04-06T02:53:18.981-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.085-0500 c20012| 2016-04-06T02:53:18.981-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.086-0500 c20012| 2016-04-06T02:53:18.981-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.088-0500 c20012| 2016-04-06T02:53:18.981-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.088-0500 c20012| 2016-04-06T02:53:18.981-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.089-0500 c20012| 2016-04-06T02:53:18.981-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.090-0500 c20012| 2016-04-06T02:53:18.981-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.101-0500 c20012| 2016-04-06T02:53:18.982-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.102-0500 c20012| 2016-04-06T02:53:18.983-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1302 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:23.983-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.102-0500 c20012| 2016-04-06T02:53:18.983-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1302 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.103-0500 c20012| 2016-04-06T02:53:18.983-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.103-0500 c20012| 2016-04-06T02:53:18.983-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.105-0500 c20012| 2016-04-06T02:53:18.983-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.106-0500 c20012| 2016-04-06T02:53:18.983-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.108-0500 c20012| 2016-04-06T02:53:18.983-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.110-0500 c20012| 2016-04-06T02:53:18.983-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.111-0500 c20012| 2016-04-06T02:53:18.983-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.112-0500 c20012| 2016-04-06T02:53:18.983-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:43.116-0500 c20012| 2016-04-06T02:53:18.983-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.117-0500 c20012| 2016-04-06T02:53:18.983-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.119-0500 c20012| 2016-04-06T02:53:18.983-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.126-0500 c20012| 2016-04-06T02:53:18.983-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1303 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.127-0500 c20012| 2016-04-06T02:53:18.983-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1303 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.127-0500 c20012| 2016-04-06T02:53:18.984-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1303 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.128-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.129-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.130-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.130-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.131-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.133-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.134-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.134-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.135-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
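The rsBackgroundSync/BGSync pairs are the oplog fetcher at work: an awaitData getMore on cursor 19461455963 against local.oplog.rs with a 2500ms wait per round trip, where the term and lastKnownCommittedOpTime fields are replication metadata piggybacked onto the command. The tailing pattern itself can be reproduced from the shell; a sketch, assuming a connection to the sync source (mongovm16:20011 here):

    // Open a tailable awaitData cursor on the oplog, then poll it with getMore,
    // mimicking the fetcher's RemoteCommand { getMore: ..., collection: "oplog.rs" }.
    var local = db.getSiblingDB("local");
    var first = local.runCommand({
        find: "oplog.rs",
        filter: { ns: { $regex: "^config\\." } },
        tailable: true,
        awaitData: true
    });
    var next = local.runCommand({
        getMore: first.cursor.id,
        collection: "oplog.rs",
        maxTimeMS: 2500  // like the fetcher, wait up to 2.5s for new entries
    });
    printjson(next.cursor.nextBatch);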
[js_test:multi_coll_drop] 2016-04-06T02:53:43.135-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.136-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.138-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.138-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.139-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.140-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.142-0500 c20012| 2016-04-06T02:53:18.984-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.144-0500 c20012| 2016-04-06T02:53:18.984-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:43.148-0500 c20012| 2016-04-06T02:53:18.985-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.151-0500 c20012| 2016-04-06T02:53:18.985-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1305 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.152-0500 c20012| 2016-04-06T02:53:18.985-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1305 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.153-0500 c20012| 2016-04-06T02:53:18.985-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1305 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.160-0500 c20012| 2016-04-06T02:53:18.986-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.176-0500 c20012| 2016-04-06T02:53:18.986-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1307 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.185-0500 c20012| 2016-04-06T02:53:18.986-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1307 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.190-0500 c20012| 2016-04-06T02:53:18.986-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1307 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.200-0500 c20012| 2016-04-06T02:53:18.986-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1302 finished with response: { cursor: { nextBatch: [], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.204-0500 c20012| 2016-04-06T02:53:18.986-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929198000|1, t: 5 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.206-0500 c20012| 2016-04-06T02:53:18.986-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:43.209-0500 c20012| 2016-04-06T02:53:18.986-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1310 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:23.986-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.215-0500 c20012| 2016-04-06T02:53:18.986-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1310 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.221-0500 c20012| 2016-04-06T02:53:18.988-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1310 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929198000|4, t: 5, h: 3594826636931372294, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929198987), waiting: true } } } ], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.228-0500 c20012| 2016-04-06T02:53:18.989-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.234-0500 c20012| 2016-04-06T02:53:18.989-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1312 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.235-0500 c20012| 2016-04-06T02:53:18.989-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1312 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.236-0500 c20012| 2016-04-06T02:53:18.989-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1312 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.237-0500 c20012| 2016-04-06T02:53:18.990-0500 D COMMAND [conn42] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.241-0500 c20012| 2016-04-06T02:53:18.990-0500 D REPL [conn42] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929198000|3, t: 5 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929194000|2, t: 5 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.243-0500 c20012| 2016-04-06T02:53:18.990-0500 D REPL [conn42] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999983μs
[js_test:multi_coll_drop] 2016-04-06T02:53:43.244-0500 c20012| 2016-04-06T02:53:18.990-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929198000|4 and ending at ts: Timestamp 1459929198000|4
[js_test:multi_coll_drop] 2016-04-06T02:53:43.247-0500 c20012| 2016-04-06T02:53:18.990-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
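The waitUntilOpTime lines show what readConcern { level: "majority", afterOpTime: ... } means server-side: conn42 cannot answer until the committed snapshot reaches the requested optime (the current snapshot, 1459929194000|2, is still behind the requested 1459929198000|3), and the remaining maxTimeMS budget (~30s, hence 29999983μs) bounds the wait. The command shape as a shell invocation (afterOpTime is metadata normally sent only by internal cluster clients; ordinary drivers send just the level):

    // The read conn42 is servicing. It blocks until a majority-committed
    // snapshot at or past afterOpTime exists, or the maxTimeMS budget expires.
    // In the shell, Timestamp(1459929198, 3) is the optime the server log
    // renders as Timestamp 1459929198000|3.
    db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929198, 3), t: NumberLong(5) }
        },
        maxTimeMS: 30000
    });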
[js_test:multi_coll_drop] 2016-04-06T02:53:43.250-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.251-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.253-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.263-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.265-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.268-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.270-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.271-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.272-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.273-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.273-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.273-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.274-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.274-0500 c20012| 2016-04-06T02:53:18.991-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:43.275-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.276-0500 c20012| 2016-04-06T02:53:18.991-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20015" }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.277-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.278-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.280-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.280-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.281-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.282-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.282-0500 c20012| 2016-04-06T02:53:18.991-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.283-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.283-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.284-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.286-0500 c20012| 2016-04-06T02:53:18.992-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:40369 #43 (12 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:43.288-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.290-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.291-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.291-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.298-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.301-0500 c20012| 2016-04-06T02:53:18.992-0500 D COMMAND [conn43] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.305-0500 c20012| 2016-04-06T02:53:18.992-0500 I COMMAND [conn43] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.322-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.326-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.330-0500 c20012| 2016-04-06T02:53:18.992-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:43.333-0500 c20012| 2016-04-06T02:53:18.992-0500 D COMMAND [conn43] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.343-0500 c20012| 2016-04-06T02:53:18.992-0500 D REPL [conn43] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929198000|3, t: 5 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929194000|2, t: 5 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.344-0500 c20012| 2016-04-06T02:53:18.992-0500 D REPL [conn43] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999983μs
[js_test:multi_coll_drop] 2016-04-06T02:53:43.345-0500 c20012| 2016-04-06T02:53:18.992-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:43.347-0500 c20012| 2016-04-06T02:53:18.992-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.350-0500 c20012| 2016-04-06T02:53:18.992-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1314 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.351-0500 c20012| 2016-04-06T02:53:18.992-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1314 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.353-0500 c20012| 2016-04-06T02:53:18.993-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1314 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.357-0500 c20012| 2016-04-06T02:53:18.993-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1316 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:23.993-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.358-0500 c20012| 2016-04-06T02:53:18.993-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1316 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.359-0500 c20012| 2016-04-06T02:53:18.994-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1316 finished with response: { cursor: { nextBatch: [], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.360-0500 c20012| 2016-04-06T02:53:18.994-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929198000|3, t: 5 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.362-0500 c20012| 2016-04-06T02:53:18.994-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:43.363-0500 c20012| 2016-04-06T02:53:18.994-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } } }
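"Updating _lastCommittedOpTime" is this member recomputing the majority commit point from the per-member durable optimes it has heard about: the newest optime that a majority of the set has made durable. A simplified model of that rule (an illustration only, not the server's code):

    // Majority commit point: sort durable optime timestamps descending and
    // take the one at the majority boundary -- the newest optime that a
    // majority of members has durably written.
    function commitPoint(durableSecs) {
        var sorted = durableSecs.slice().sort(function(a, b) { return b - a; });
        var majority = Math.floor(sorted.length / 2) + 1;
        return sorted[majority - 1];
    }
    // Two of three members are at 1459929198, so that is the commit point:
    commitPoint([1459929188, 1459929198, 1459929198]); // 1459929198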
[js_test:multi_coll_drop] 2016-04-06T02:53:43.369-0500 c20012| 2016-04-06T02:53:18.994-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.379-0500 c20012| 2016-04-06T02:53:18.994-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.387-0500 c20012| 2016-04-06T02:53:18.994-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1318 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.390-0500 c20012| 2016-04-06T02:53:18.994-0500 D COMMAND [conn43] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.393-0500 c20012| 2016-04-06T02:53:18.994-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1318 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.397-0500 c20012| 2016-04-06T02:53:18.995-0500 D COMMAND [conn43] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.400-0500 c20012| 2016-04-06T02:53:18.995-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1318 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.405-0500 c20012| 2016-04-06T02:53:18.995-0500 D QUERY [conn43] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:43.408-0500 c20012| 2016-04-06T02:53:18.994-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:43.411-0500 c20012| 2016-04-06T02:53:18.995-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1320 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:23.995-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.414-0500 c20012| 2016-04-06T02:53:18.995-0500 I COMMAND [conn42] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:423 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 5ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.418-0500 c20012| 2016-04-06T02:53:18.995-0500 I COMMAND [conn43] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.420-0500 c20012| 2016-04-06T02:53:18.995-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1320 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.422-0500 c20012| 2016-04-06T02:53:18.996-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1320 finished with response: { cursor: { nextBatch: [], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.423-0500 c20012| 2016-04-06T02:53:18.996-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929198000|4, t: 5 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.423-0500 c20012| 2016-04-06T02:53:18.996-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:43.426-0500 c20012| 2016-04-06T02:53:18.996-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1322 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:23.996-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.428-0500 c20012| 2016-04-06T02:53:18.996-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1322 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.430-0500 c20012| 2016-04-06T02:53:18.999-0500 D COMMAND [conn43] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.431-0500 c20012| 2016-04-06T02:53:18.999-0500 D COMMAND [conn43] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.433-0500 c20012| 2016-04-06T02:53:18.999-0500 D COMMAND [conn43] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.434-0500 c20012| 2016-04-06T02:53:18.999-0500 D QUERY [conn43] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:43.438-0500 c20012| 2016-04-06T02:53:19.015-0500 I COMMAND [conn43] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 15ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.440-0500 c20012| 2016-04-06T02:53:19.035-0500 D COMMAND [conn43] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.441-0500 c20012| 2016-04-06T02:53:19.035-0500 D COMMAND [conn43] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.442-0500 c20012| 2016-04-06T02:53:19.035-0500 D COMMAND [conn43] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.455-0500 c20012| 2016-04-06T02:53:19.035-0500 D QUERY [conn43] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:43.462-0500 c20012| 2016-04-06T02:53:19.036-0500 I COMMAND [conn43] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.463-0500 c20012| 2016-04-06T02:53:20.211-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.464-0500 c20012| 2016-04-06T02:53:20.211-0500 D COMMAND [conn31] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:43.480-0500 c20012| 2016-04-06T02:53:20.211-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } numYields:0 reslen:509 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.482-0500 c20012| 2016-04-06T02:53:20.731-0500 D COMMAND [conn35] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.485-0500 c20012| 2016-04-06T02:53:20.732-0500 I COMMAND [conn35] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.489-0500 c20012| 2016-04-06T02:53:20.810-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1323 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:30.810-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.490-0500 c20012| 2016-04-06T02:53:20.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1323 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.502-0500 c20012| 2016-04-06T02:53:20.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1323 finished with response: { ok: 1.0, electionTime: new Date(6270348142705639425), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, opTime: { ts: Timestamp 1459929198000|4, t: 5 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.503-0500 c20012| 2016-04-06T02:53:20.811-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:22.811Z
[js_test:multi_coll_drop] 2016-04-06T02:53:43.505-0500 c20012| 2016-04-06T02:53:20.968-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.505-0500 c20012| 2016-04-06T02:53:20.968-0500 D COMMAND [conn37] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:43.508-0500 c20012| 2016-04-06T02:53:20.969-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:509 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.510-0500 c20012| 2016-04-06T02:53:20.975-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1325 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:30.975-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.512-0500 c20012| 2016-04-06T02:53:20.975-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1325 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:43.537-0500 c20012| 2016-04-06T02:53:20.975-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1325 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, opTime: { ts: Timestamp 1459929198000|2, t: 4 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.539-0500 c20012| 2016-04-06T02:53:20.975-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:22.975Z
[js_test:multi_coll_drop] 2016-04-06T02:53:43.544-0500 c20012| 2016-04-06T02:53:21.495-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.548-0500 c20012| 2016-04-06T02:53:21.495-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1327 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.550-0500 c20012| 2016-04-06T02:53:21.495-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1327 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.551-0500 c20012| 2016-04-06T02:53:21.495-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1327 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.553-0500 c20012| 2016-04-06T02:53:21.497-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1322 finished with response: { cursor: { nextBatch: [], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.555-0500 c20012| 2016-04-06T02:53:21.498-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:43.559-0500 c20012| 2016-04-06T02:53:21.498-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1330 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.498-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.560-0500 c20012| 2016-04-06T02:53:21.498-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1330 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:43.561-0500 c20012| 2016-04-06T02:53:21.975-0500 D COMMAND [conn43] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.564-0500 c20012| 2016-04-06T02:53:21.975-0500 D COMMAND [conn43] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.566-0500 c20012| 2016-04-06T02:53:21.975-0500 D COMMAND [conn43] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.569-0500 c20012| 2016-04-06T02:53:21.976-0500 D QUERY [conn43] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
[js_test:multi_coll_drop] 2016-04-06T02:53:43.574-0500 c20012| 2016-04-06T02:53:21.976-0500 I COMMAND [conn43] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.577-0500 c20012| 2016-04-06T02:53:21.976-0500 D COMMAND [conn42] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.577-0500 c20012| 2016-04-06T02:53:21.976-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.581-0500 c20012| 2016-04-06T02:53:21.976-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.593-0500 c20012| 2016-04-06T02:53:21.976-0500 D QUERY [conn42] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1
[js_test:multi_coll_drop] 2016-04-06T02:53:43.595-0500 c20012| 2016-04-06T02:53:21.976-0500 I COMMAND [conn42] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:408 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:43.597-0500 c20012| 2016-04-06T02:53:21.986-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1330 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929201000|1, t: 5, h: -1628857208926061585, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929201977), up: 74, waiting: true } } } ], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:43.598-0500 c20012| 2016-04-06T02:53:21.987-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929201000|1 and ending at ts: Timestamp 1459929201000|1
[js_test:multi_coll_drop] 2016-04-06T02:53:43.598-0500 c20012| 2016-04-06T02:53:21.987-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:43.600-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.603-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.604-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.605-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.605-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.606-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.607-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.607-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.608-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.608-0500 c20012| 2016-04-06T02:53:21.987-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.609-0500 c20012| 2016-04-06T02:53:21.988-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.609-0500 c20012| 2016-04-06T02:53:21.988-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.609-0500 c20012| 2016-04-06T02:53:21.988-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.610-0500 c20012| 2016-04-06T02:53:21.988-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.610-0500 c20012| 2016-04-06T02:53:21.988-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.611-0500 c20012| 2016-04-06T02:53:21.988-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.611-0500 c20012| 2016-04-06T02:53:21.988-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:43.612-0500 c20012| 2016-04-06T02:53:21.988-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:43.613-0500 c20012| 2016-04-06T02:53:21.988-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.614-0500 c20012| 2016-04-06T02:53:21.988-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
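[editor's note] Around this point the background sync thread keeps its oplog cursor alive with awaitData getMores (requests 1330/1332/1338 in the surrounding entries), passing its replication term and lastKnownCommittedOpTime so the upstream node can piggyback commit-point advances on otherwise empty batches. A hedged sketch of that tailing loop against the sync source follows; term and lastKnownCommittedOpTime are internal replication fields and are included only to mirror the log, so a stock client may not be able to send them as-is.

// Sketch only: an oplog tail resembling the BGSync fetcher's
// getMore loop in the log above. Host, timeout, and opTimes are
// taken from the log; the initial find is an assumption (the
// fetcher established its cursor earlier in the run).
var local = new Mongo("mongovm16:20011").getDB("local");
var first = local.runCommand({
    find: "oplog.rs",
    filter: { ts: { $gte: Timestamp(1459929198, 4) } },
    tailable: true, awaitData: true, oplogReplay: true
});
var batch = local.runCommand({
    getMore: first.cursor.id,
    collection: "oplog.rs",
    maxTimeMS: 2500,   // matches the 2.5s await window seen above
    // Internal replication fields, shown to mirror the log:
    term: NumberLong(5),
    lastKnownCommittedOpTime: { ts: Timestamp(1459929201, 1), t: NumberLong(5) }
});
printjson(batch.cursor.nextBatch);

When a batch does arrive (as in request 1330 above, carrying one config.mongos ping update), the rsSync applier fans it out to the "repl writer worker" pool whose thread start/shutdown chatter surrounds this note; with a replication batch size of 1, the pool spins up and tears down around a single idhack update.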
2016-04-06T02:53:43.617-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.617-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.618-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.618-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.619-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.620-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.621-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.623-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.623-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.625-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.626-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.627-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.628-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.629-0500 c20012| 2016-04-06T02:53:21.989-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:43.632-0500 c20012| 2016-04-06T02:53:21.989-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1332 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.989-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.634-0500 c20012| 2016-04-06T02:53:21.989-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1332 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:43.636-0500 c20012| 2016-04-06T02:53:21.990-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:43.640-0500 c20012| 2016-04-06T02:53:21.990-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:43.643-0500 c20012| 2016-04-06T02:53:21.990-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1333 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:43.644-0500 c20012| 2016-04-06T02:53:21.990-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1333 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:43.646-0500 c20012| 2016-04-06T02:53:21.990-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1333 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.651-0500 c20012| 2016-04-06T02:53:21.992-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:43.654-0500 c20012| 2016-04-06T02:53:21.992-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1335 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:43.655-0500 c20012| 2016-04-06T02:53:21.992-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1335 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:43.656-0500 c20012| 2016-04-06T02:53:21.992-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1335 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.658-0500 c20012| 2016-04-06T02:53:21.992-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1332 finished with response: { cursor: { nextBatch: [], id: 19461455963, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.658-0500 c20012| 2016-04-06T02:53:21.993-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.659-0500 c20012| 2016-04-06T02:53:21.993-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:43.662-0500 c20012| 2016-04-06T02:53:21.993-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1338 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.993-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.664-0500 c20012| 2016-04-06T02:53:21.993-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1338 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:43.666-0500 c20012| 2016-04-06T02:53:22.042-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.670-0500 c20012| 2016-04-06T02:53:22.042-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.672-0500 c20012| 2016-04-06T02:53:22.042-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.674-0500 c20012| 2016-04-06T02:53:22.043-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.683-0500 c20012| 2016-04-06T02:53:22.043-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.688-0500 c20012| 2016-04-06T02:53:22.046-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.691-0500 c20012| 2016-04-06T02:53:22.046-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.695-0500 c20012| 2016-04-06T02:53:22.046-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.698-0500 c20012| 2016-04-06T02:53:22.046-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.710-0500 c20012| 2016-04-06T02:53:22.046-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.715-0500 c20012| 2016-04-06T02:53:22.049-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.718-0500 c20012| 2016-04-06T02:53:22.049-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.721-0500 c20012| 2016-04-06T02:53:22.050-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.724-0500 c20012| 2016-04-06T02:53:22.050-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.728-0500 c20012| 2016-04-06T02:53:22.050-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.732-0500 c20012| 2016-04-06T02:53:22.058-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.735-0500 c20012| 2016-04-06T02:53:22.058-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.741-0500 c20012| 2016-04-06T02:53:22.058-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.743-0500 c20012| 2016-04-06T02:53:22.058-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.749-0500 c20012| 2016-04-06T02:53:22.058-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.751-0500 c20012| 2016-04-06T02:53:22.074-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.755-0500 c20012| 2016-04-06T02:53:22.074-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.776-0500 c20012| 2016-04-06T02:53:22.074-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.780-0500 c20012| 2016-04-06T02:53:22.074-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.784-0500 c20012| 2016-04-06T02:53:22.074-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.786-0500 c20012| 2016-04-06T02:53:22.081-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.788-0500 c20012| 2016-04-06T02:53:22.081-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.790-0500 c20012| 2016-04-06T02:53:22.081-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.791-0500 c20012| 2016-04-06T02:53:22.081-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.800-0500 c20012| 2016-04-06T02:53:22.082-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.802-0500 c20012| 2016-04-06T02:53:22.097-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.804-0500 c20012| 2016-04-06T02:53:22.098-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.806-0500 c20012| 2016-04-06T02:53:22.098-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.812-0500 c20012| 2016-04-06T02:53:22.098-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.819-0500 c20012| 2016-04-06T02:53:22.098-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.820-0500 c20012| 2016-04-06T02:53:22.129-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.824-0500 c20012| 2016-04-06T02:53:22.129-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.827-0500 c20012| 2016-04-06T02:53:22.129-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.829-0500 c20012| 2016-04-06T02:53:22.129-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.834-0500 c20012| 2016-04-06T02:53:22.129-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.836-0500 c20012| 2016-04-06T02:53:22.134-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.838-0500 c20012| 2016-04-06T02:53:22.134-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.843-0500 c20012| 2016-04-06T02:53:22.134-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.846-0500 c20012| 2016-04-06T02:53:22.134-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.852-0500 c20012| 2016-04-06T02:53:22.134-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.853-0500 c20012| 2016-04-06T02:53:22.150-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.858-0500 c20012| 2016-04-06T02:53:22.150-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.863-0500 c20012| 2016-04-06T02:53:22.150-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.866-0500 c20012| 2016-04-06T02:53:22.150-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.881-0500 c20012| 2016-04-06T02:53:22.150-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.887-0500 c20012| 2016-04-06T02:53:22.212-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.888-0500 c20012| 2016-04-06T02:53:22.212-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:43.890-0500 c20012| 2016-04-06T02:53:22.216-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.898-0500 c20012| 2016-04-06T02:53:22.216-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.901-0500 c20012| 2016-04-06T02:53:22.216-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.905-0500 c20012| 2016-04-06T02:53:22.216-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.915-0500 c20012| 2016-04-06T02:53:22.216-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.917-0500 c20012| 2016-04-06T02:53:22.217-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } numYields:0 reslen:509 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.923-0500 c20012| 2016-04-06T02:53:22.225-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.928-0500 c20012| 2016-04-06T02:53:22.225-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.930-0500 c20012| 2016-04-06T02:53:22.225-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.933-0500 c20012| 2016-04-06T02:53:22.225-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.937-0500 c20012| 2016-04-06T02:53:22.225-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.939-0500 c20012| 2016-04-06T02:53:22.230-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.940-0500 c20012| 2016-04-06T02:53:22.230-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.942-0500 c20012| 2016-04-06T02:53:22.230-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.942-0500 c20012| 2016-04-06T02:53:22.231-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.946-0500 c20012| 2016-04-06T02:53:22.231-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.949-0500 *** Stepping down connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:43.950-0500 c20012| 2016-04-06T02:53:22.259-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.952-0500 c20012| 2016-04-06T02:53:22.259-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.958-0500 c20012| 2016-04-06T02:53:22.259-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.959-0500 c20012| 2016-04-06T02:53:22.260-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.965-0500 c20012| 2016-04-06T02:53:22.260-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.970-0500 c20012| 2016-04-06T02:53:22.295-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.972-0500 c20012| 2016-04-06T02:53:22.295-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.974-0500 c20012| 2016-04-06T02:53:22.295-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.978-0500 c20012| 2016-04-06T02:53:22.295-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:43.982-0500 c20012| 2016-04-06T02:53:22.295-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:43.984-0500 c20012| 2016-04-06T02:53:22.305-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.989-0500 c20012| 2016-04-06T02:53:22.305-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:43.992-0500 c20012| 2016-04-06T02:53:22.305-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:43.996-0500 c20012| 2016-04-06T02:53:22.306-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.014-0500 c20012| 2016-04-06T02:53:22.310-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.016-0500 c20012| 2016-04-06T02:53:22.324-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.018-0500 c20012| 2016-04-06T02:53:22.324-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.022-0500 c20012| 2016-04-06T02:53:22.325-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.025-0500 c20012| 2016-04-06T02:53:22.325-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.032-0500 c20012| 2016-04-06T02:53:22.327-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.036-0500 c20012| 2016-04-06T02:53:22.331-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.040-0500 c20012| 2016-04-06T02:53:22.331-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.042-0500 c20012| 2016-04-06T02:53:22.331-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.045-0500 c20012| 2016-04-06T02:53:22.331-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.051-0500 c20012| 2016-04-06T02:53:22.334-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.052-0500 c20012| 2016-04-06T02:53:22.346-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.057-0500 c20012| 2016-04-06T02:53:22.346-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.059-0500 c20012| 2016-04-06T02:53:22.346-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.064-0500 c20012| 2016-04-06T02:53:22.346-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.068-0500 c20012| 2016-04-06T02:53:22.346-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.071-0500 c20012| 2016-04-06T02:53:22.357-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.071-0500 c20012| 2016-04-06T02:53:22.357-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.076-0500 c20012| 2016-04-06T02:53:22.357-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.080-0500 c20012| 2016-04-06T02:53:22.357-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.085-0500 c20012| 2016-04-06T02:53:22.357-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.087-0500 c20012| 2016-04-06T02:53:22.378-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.090-0500 c20012| 2016-04-06T02:53:22.378-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.093-0500 c20012| 2016-04-06T02:53:22.378-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.094-0500 c20012| 2016-04-06T02:53:22.378-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.099-0500 c20012| 2016-04-06T02:53:22.378-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.107-0500 c20012| 2016-04-06T02:53:22.382-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.110-0500 c20012| 2016-04-06T02:53:22.382-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.111-0500 c20012| 2016-04-06T02:53:22.382-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.118-0500 c20012| 2016-04-06T02:53:22.382-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.122-0500 c20012| 2016-04-06T02:53:22.382-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.125-0500 c20012| 2016-04-06T02:53:22.391-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.133-0500 c20012| 2016-04-06T02:53:22.391-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.143-0500 c20012| 2016-04-06T02:53:22.391-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.145-0500 c20012| 2016-04-06T02:53:22.391-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.155-0500 c20012| 2016-04-06T02:53:22.391-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.157-0500 c20012| 2016-04-06T02:53:22.414-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.164-0500 c20012| 2016-04-06T02:53:22.414-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.169-0500 c20012| 2016-04-06T02:53:22.414-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.170-0500 c20012| 2016-04-06T02:53:22.414-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.182-0500 c20012| 2016-04-06T02:53:22.414-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.186-0500 c20012| 2016-04-06T02:53:22.416-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.190-0500 c20012| 2016-04-06T02:53:22.416-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.194-0500 c20012| 2016-04-06T02:53:22.416-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.194-0500 c20012| 2016-04-06T02:53:22.416-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.200-0500 c20012| 2016-04-06T02:53:22.416-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.203-0500 c20012| 2016-04-06T02:53:22.420-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.207-0500 c20012| 2016-04-06T02:53:22.420-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.211-0500 c20012| 2016-04-06T02:53:22.420-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.213-0500 c20012| 2016-04-06T02:53:22.420-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.215-0500 c20012| 2016-04-06T02:53:22.420-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.217-0500 c20012| 2016-04-06T02:53:22.425-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.219-0500 c20012| 2016-04-06T02:53:22.425-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.222-0500 c20012| 2016-04-06T02:53:22.425-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.228-0500 c20012| 2016-04-06T02:53:22.425-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.231-0500 c20012| 2016-04-06T02:53:22.425-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.235-0500 c20012| 2016-04-06T02:53:22.433-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.238-0500 c20012| 2016-04-06T02:53:22.433-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.242-0500 c20012| 2016-04-06T02:53:22.433-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.245-0500 c20012| 2016-04-06T02:53:22.433-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.248-0500 c20012| 2016-04-06T02:53:22.433-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.254-0500 c20012| 2016-04-06T02:53:22.441-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.254-0500 c20012| 2016-04-06T02:53:22.441-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.263-0500 c20012| 2016-04-06T02:53:22.441-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.266-0500 c20012| 2016-04-06T02:53:22.442-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.269-0500 c20012| 2016-04-06T02:53:22.442-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.273-0500 c20012| 2016-04-06T02:53:22.446-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.278-0500 c20012| 2016-04-06T02:53:22.446-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.281-0500 c20012| 2016-04-06T02:53:22.446-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.282-0500 c20012| 2016-04-06T02:53:22.446-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.286-0500 c20012| 2016-04-06T02:53:22.446-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.287-0500 c20012| 2016-04-06T02:53:22.461-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.292-0500 c20012| 2016-04-06T02:53:22.461-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.296-0500 c20012| 2016-04-06T02:53:22.461-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.297-0500 c20012| 2016-04-06T02:53:22.461-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.300-0500 c20012| 2016-04-06T02:53:22.461-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.302-0500 c20012| 2016-04-06T02:53:22.469-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.304-0500 c20012| 2016-04-06T02:53:22.469-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.307-0500 c20012| 2016-04-06T02:53:22.469-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.308-0500 c20012| 2016-04-06T02:53:22.469-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.316-0500 c20012| 2016-04-06T02:53:22.469-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.318-0500 c20012| 2016-04-06T02:53:22.476-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.320-0500 c20012| 2016-04-06T02:53:22.476-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.324-0500 c20012| 2016-04-06T02:53:22.476-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.326-0500 c20012| 2016-04-06T02:53:22.477-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.332-0500 c20012| 2016-04-06T02:53:22.480-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.335-0500 c20012| 2016-04-06T02:53:22.498-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.338-0500 c20012| 2016-04-06T02:53:22.498-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.340-0500 c20012| 2016-04-06T02:53:22.498-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.347-0500 c20012| 2016-04-06T02:53:22.499-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.349-0500 c20012| 2016-04-06T02:53:22.499-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.351-0500 c20012| 2016-04-06T02:53:22.516-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.352-0500 c20012| 2016-04-06T02:53:22.516-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.355-0500 c20012| 2016-04-06T02:53:22.516-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.357-0500 c20012| 2016-04-06T02:53:22.516-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.363-0500 c20012| 2016-04-06T02:53:22.516-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.366-0500 c20012| 2016-04-06T02:53:22.531-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.366-0500 c20012| 2016-04-06T02:53:22.531-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.368-0500 c20012| 2016-04-06T02:53:22.531-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.370-0500 c20012| 2016-04-06T02:53:22.531-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.374-0500 c20012| 2016-04-06T02:53:22.531-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.379-0500 c20012| 2016-04-06T02:53:22.557-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.380-0500 c20012| 2016-04-06T02:53:22.557-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.386-0500 c20012| 2016-04-06T02:53:22.557-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.389-0500 c20012| 2016-04-06T02:53:22.557-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.392-0500 c20012| 2016-04-06T02:53:22.557-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.394-0500 c20012| 2016-04-06T02:53:22.811-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1339 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:32.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.395-0500 c20012| 2016-04-06T02:53:22.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1339 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:44.399-0500 c20012| 2016-04-06T02:53:22.976-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1340 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:32.976-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.401-0500 c20012| 2016-04-06T02:53:22.976-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.402-0500 c20012| 2016-04-06T02:53:22.976-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:44.402-0500 c20012| 2016-04-06T02:53:22.976-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.403-0500 c20012| 2016-04-06T02:53:22.980-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1341 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.404-0500 c20012| 2016-04-06T02:53:22.981-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.407-0500 c20012| 2016-04-06T02:53:22.981-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1341 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:44.409-0500 c20012| 2016-04-06T02:53:22.981-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1340 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.411-0500 c20012| 2016-04-06T02:53:22.982-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1340 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929201000|1, t: 
5 }, opTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.411-0500 c20012| 2016-04-06T02:53:22.982-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:24.982Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.413-0500 c20012| 2016-04-06T02:53:23.470-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.414-0500 c20012| 2016-04-06T02:53:23.470-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.418-0500 c20012| 2016-04-06T02:53:23.470-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 5 } numYields:0 reslen:509 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.421-0500 c20012| 2016-04-06T02:53:24.492-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:44.426-0500 c20012| 2016-04-06T02:53:24.492-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1343 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:44.427-0500 c20012| 2016-04-06T02:53:24.492-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1343 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:44.429-0500 c20012| 2016-04-06T02:53:24.666-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.429-0500 c20012| 2016-04-06T02:53:24.666-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.430-0500 c20012| 2016-04-06T02:53:24.982-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1344 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:34.982-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.430-0500 c20012| 2016-04-06T02:53:24.982-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1344 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.434-0500 c20012| 2016-04-06T02:53:24.983-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1344 
finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, opTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.435-0500 c20012| 2016-04-06T02:53:24.983-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:26.983Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.436-0500 c20012| 2016-04-06T02:53:25.471-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.437-0500 c20012| 2016-04-06T02:53:25.471-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.440-0500 c20012| 2016-04-06T02:53:25.471-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 5 } numYields:0 reslen:509 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.443-0500 c20012| 2016-04-06T02:53:26.983-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1346 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:36.983-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.446-0500 c20012| 2016-04-06T02:53:26.983-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1346 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.448-0500 c20012| 2016-04-06T02:53:26.984-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1346 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, opTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.449-0500 c20012| 2016-04-06T02:53:26.984-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:28.984Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.454-0500 c20012| 2016-04-06T02:53:26.993-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1338 timed out, adjusted timeout after getting connection from pool was 5000ms, op was id: 17, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 2016-04-06T02:53:21.993-0500, request: RemoteCommand 1338 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.993-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.460-0500 c20012| 2016-04-06T02:53:26.993-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Operation timing out; original request was: RemoteCommand 1338 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.993-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.463-0500 c20012| 2016-04-06T02:53:26.993-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to execute command: RemoteCommand 1338 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.993-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1338 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.993-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.466-0500 c20012| 2016-04-06T02:53:26.993-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1338 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1338 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.993-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.468-0500 c20012| 2016-04-06T02:53:26.993-0500 D REPL [rsBackgroundSync-0] Error returned from oplog query: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1338 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:26.993-0500 cmd:{ getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.472-0500 c20012| 2016-04-06T02:53:26.993-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:44.473-0500 c20012| 2016-04-06T02:53:26.994-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:44.479-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 1339 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:32.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.481-0500 c20012| 2016-04-06T02:53:26.994-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:26.994Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.483-0500 c20012| 2016-04-06T02:53:26.994-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:26.994Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.487-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1339 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:32.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:44.489-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1339 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:44.491-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1350 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:32.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.491-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:44.493-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO 
[ReplicationExecutor] startCommand: RemoteCommand 1352 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:36.994-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.494-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1352 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.495-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1351 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:44.499-0500 c20012| 2016-04-06T02:53:26.994-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1352 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, opTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.501-0500 c20012| 2016-04-06T02:53:26.995-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:29.494Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.503-0500 c20012| 2016-04-06T02:53:26.995-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.503-0500 c20012| 2016-04-06T02:53:26.995-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.506-0500 c20012| 2016-04-06T02:53:26.995-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 5 } numYields:0 reslen:478 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.510-0500 c20012| 2016-04-06T02:53:27.285-0500 D COMMAND [conn37] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 5, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.511-0500 c20012| 2016-04-06T02:53:27.285-0500 D COMMAND [conn37] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:44.513-0500 c20012| 2016-04-06T02:53:27.286-0500 D QUERY [conn37] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:44.516-0500 c20012| 2016-04-06T02:53:27.286-0500 I COMMAND [conn37] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 5, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929201000|1, t: 5 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.521-0500 c20012| 2016-04-06T02:53:27.287-0500 D COMMAND [conn37] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 6, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.522-0500 c20012| 2016-04-06T02:53:27.287-0500 D COMMAND [conn37] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:44.528-0500 c20012| 2016-04-06T02:53:27.287-0500 D QUERY [conn37] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:44.529-0500 c20012| 2016-04-06T02:53:27.287-0500 I COMMAND [conn37] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 6, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929201000|1, t: 5 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.531-0500 c20012| 2016-04-06T02:53:27.288-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.532-0500 c20012| 2016-04-06T02:53:27.288-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.535-0500 c20012| 2016-04-06T02:53:27.289-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 6 } numYields:0 reslen:478 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.537-0500 s20015| 2016-04-06T02:53:28.975-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:44.538-0500 s20015| 2016-04-06T02:53:28.976-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:44.540-0500 s20015| 2016-04-06T02:53:28.976-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20011, no events [js_test:multi_coll_drop] 2016-04-06T02:53:44.543-0500 c20013| 2016-04-06T02:52:26.879-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, 
appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:44.546-0500 c20013| 2016-04-06T02:52:26.879-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1217 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|7, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:44.549-0500 c20013| 2016-04-06T02:52:26.879-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1217 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.550-0500 c20013| 2016-04-06T02:52:26.879-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1217 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.552-0500 c20013| 2016-04-06T02:52:26.879-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1219 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.879-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|7, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.553-0500 c20013| 2016-04-06T02:52:26.879-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1219 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.561-0500 c20013| 2016-04-06T02:52:26.881-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:44.566-0500 c20013| 2016-04-06T02:52:26.881-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1220 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:44.567-0500 c20013| 2016-04-06T02:52:26.881-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1220 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.569-0500 c20013| 2016-04-06T02:52:26.881-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1220 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.572-0500 c20013| 
2016-04-06T02:52:26.881-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1219 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.574-0500 c20013| 2016-04-06T02:52:26.881-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|8, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.576-0500 c20013| 2016-04-06T02:52:26.881-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:44.579-0500 c20013| 2016-04-06T02:52:26.881-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1223 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.881-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.581-0500 c20013| 2016-04-06T02:52:26.881-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1223 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.585-0500 c20013| 2016-04-06T02:52:26.882-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|48 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.588-0500 c20013| 2016-04-06T02:52:26.882-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.591-0500 c20013| 2016-04-06T02:52:26.882-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|48 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.595-0500 c20013| 2016-04-06T02:52:26.882-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.597-0500 c20013| 2016-04-06T02:52:26.882-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|48 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.598-0500 c20013| 2016-04-06T02:52:26.883-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.601-0500 c20013| 2016-04-06T02:52:26.883-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.605-0500 c20013| 2016-04-06T02:52:26.883-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.608-0500 c20013| 2016-04-06T02:52:26.883-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:44.609-0500 c20013| 2016-04-06T02:52:26.883-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.616-0500 c20013| 2016-04-06T02:52:26.884-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1223 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|9, t: 2, h: 622575099516940850, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c03a65c17830b843f1af'), state: 2, when: new Date(1459929146883), why: "splitting chunk [{ _id: -76.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.616-0500 c20011| 2016-04-06T02:52:51.721-0500 D COMMAND [conn39] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.619-0500 c20011| 2016-04-06T02:52:51.722-0500 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.623-0500 c20011| 2016-04-06T02:52:51.726-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 317 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:01.726-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.624-0500 c20011| 2016-04-06T02:52:51.726-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 317 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.628-0500 c20011| 2016-04-06T02:52:51.728-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 317 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.630-0500 c20011| 2016-04-06T02:52:51.729-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:53.729Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.634-0500 c20011| 2016-04-06T02:52:51.766-0500 D COMMAND [conn36] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } 
[js_test:multi_coll_drop] 2016-04-06T02:53:44.635-0500 c20011| 2016-04-06T02:52:51.766-0500 D QUERY [conn36] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:44.640-0500 c20011| 2016-04-06T02:52:51.766-0500 I WRITE [conn36] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.644-0500 c20011| 2016-04-06T02:52:51.774-0500 D COMMAND [conn38] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.645-0500 c20011| 2016-04-06T02:52:51.774-0500 D QUERY [conn38] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:44.647-0500 c20011| 2016-04-06T02:52:51.774-0500 I WRITE [conn38] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.651-0500 c20011| 2016-04-06T02:52:51.776-0500 D REPL [conn36] Required snapshot optime: { ts: Timestamp 1459929171000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "251" } [js_test:multi_coll_drop] 2016-04-06T02:53:44.653-0500 c20011| 2016-04-06T02:52:51.785-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929171000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "251" } [js_test:multi_coll_drop] 2016-04-06T02:53:44.656-0500 c20011| 2016-04-06T02:52:51.785-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929171000|2, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "251" } [js_test:multi_coll_drop] 2016-04-06T02:53:44.657-0500 c20011| 2016-04-06T02:52:52.734-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.659-0500 c20011| 2016-04-06T02:52:52.734-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.662-0500 c20011| 2016-04-06T02:52:52.735-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.664-0500 c20011| 
2016-04-06T02:52:53.729-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 319 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:03.729-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.664-0500 c20011| 2016-04-06T02:52:53.730-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 319 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.669-0500 c20011| 2016-04-06T02:52:53.730-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 319 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.669-0500 c20011| 2016-04-06T02:52:53.731-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:55.731Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.671-0500 c20011| 2016-04-06T02:52:54.213-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 309 timed out, adjusted timeout after getting connection from pool was 9999ms, op was id: 4, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 2016-04-06T02:52:44.213-0500, request: RemoteCommand 309 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.672-0500 c20011| 2016-04-06T02:52:54.213-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Operation timing out; original request was: RemoteCommand 309 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.676-0500 c20011| 2016-04-06T02:52:54.213-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 309 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 309 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.680-0500 c20011| 2016-04-06T02:52:54.213-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 309 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 309 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.682-0500 c20011| 2016-04-06T02:52:54.213-0500 I REPL [ReplicationExecutor] Error in heartbeat request to mongovm16:20013; ExceededTimeLimit: Operation timed out, request was RemoteCommand 309 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.686-0500 c20011| 2016-04-06T02:52:54.213-0500 D REPL [ReplicationExecutor] 
setDownValues: heartbeat response failed for member _id:2, msg: Operation timed out, request was RemoteCommand 309 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:52:54.213-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.687-0500 2016-04-06T02:53:29.110-0500 I NETWORK [thread2] trying reconnect to mongovm16:20011 (192.168.100.28) failed [js_test:multi_coll_drop] 2016-04-06T02:53:44.687-0500 c20011| 2016-04-06T02:52:54.213-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:52:56.213Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.688-0500 c20011| 2016-04-06T02:52:54.736-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.689-0500 c20011| 2016-04-06T02:52:54.736-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.691-0500 c20011| 2016-04-06T02:52:54.736-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.692-0500 c20011| 2016-04-06T02:52:55.731-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 322 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:05.731-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.694-0500 c20011| 2016-04-06T02:52:55.733-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 322 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.696-0500 c20011| 2016-04-06T02:52:55.733-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 322 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.698-0500 c20011| 2016-04-06T02:52:55.733-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:57.733Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.702-0500 c20011| 2016-04-06T02:52:56.214-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 324 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:06.214-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.703-0500 c20011| 2016-04-06T02:52:56.215-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.704-0500 c20011| 2016-04-06T02:52:56.216-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 325 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:44.705-0500 c20011| 2016-04-06T02:52:56.737-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.705-0500 c20011| 2016-04-06T02:52:56.737-0500 D COMMAND [conn29] command: replSetHeartbeat 
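The ExceededTimeLimit above is the heartbeat path rather than a client operation: request 309 is a replSetHeartbeat from mongovm16:20011 to mongovm16:20013 with a ten-second budget (its expDate is 10s past its start_time), and when it expires the ReplicationExecutor records the member as failed (setDownValues for member _id:2) and schedules the next attempt two seconds out. The same per-member view is available from the shell, roughly as follows (a sketch using the standard replSetGetStatus command, which this test does not itself run):

    // Print each member's state and last heartbeat message, e.g. after a
    // heartbeat failure like the one logged above.
    var st = db.adminCommand({ replSetGetStatus: 1 });
    st.members.forEach(function (m) {
        print(m.name + " " + m.stateStr + " " + (m.lastHeartbeatMessage || ""));
    });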
[js_test:multi_coll_drop] 2016-04-06T02:53:44.706-0500 c20011| 2016-04-06T02:52:56.738-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.709-0500 c20011| 2016-04-06T02:52:56.920-0500 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager [js_test:multi_coll_drop] 2016-04-06T02:53:44.710-0500 c20011| 2016-04-06T02:52:57.733-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 326 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:07.733-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.712-0500 c20011| 2016-04-06T02:52:57.733-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 326 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.714-0500 c20011| 2016-04-06T02:52:57.733-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 326 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.721-0500 c20011| 2016-04-06T02:52:57.734-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:59.734Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.726-0500 c20011| 2016-04-06T02:52:58.741-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.726-0500 c20011| 2016-04-06T02:52:58.741-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.728-0500 c20011| 2016-04-06T02:52:58.741-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.732-0500 c20011| 2016-04-06T02:52:59.734-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 328 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:09.734-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.732-0500 c20011| 2016-04-06T02:52:59.734-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 328 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.735-0500 c20011| 2016-04-06T02:52:59.735-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 328 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.736-0500 c20011| 2016-04-06T02:52:59.735-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:01.735Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.738-0500 c20011| 2016-04-06T02:53:00.744-0500 D COMMAND [conn29] run command admin.$cmd 
{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.739-0500 c20011| 2016-04-06T02:53:00.744-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.743-0500 c20011| 2016-04-06T02:53:00.744-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.745-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:_mdb_catalog -> { numRecords: 17, dataSize: 6594 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.747-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-11--6404702321693896372 -> { numRecords: 1, dataSize: 83 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.750-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-13--6404702321693896372 -> { numRecords: 2, dataSize: 72 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.754-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-15--6404702321693896372 -> { numRecords: 38, dataSize: 6522 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.756-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-20--6404702321693896372 -> { numRecords: 1, dataSize: 50 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.757-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-23--6404702321693896372 -> { numRecords: 3, dataSize: 644 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.759-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-28--6404702321693896372 -> { numRecords: 0, dataSize: 0 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.760-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-31--6404702321693896372 -> { numRecords: 2, dataSize: 204 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.761-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-33--6404702321693896372 -> { numRecords: 40, dataSize: 17395 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.762-0500 c20011| 2016-04-06T02:53:00.770-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-35--6404702321693896372 -> { numRecords: 1, dataSize: 61 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.763-0500 c20011| 2016-04-06T02:53:00.771-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-37--6404702321693896372 -> { numRecords: 1, dataSize: 114 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.765-0500 c20011| 2016-04-06T02:53:00.771-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-39--6404702321693896372 -> { numRecords: 1, dataSize: 45 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.766-0500 c20011| 2016-04-06T02:53:00.771-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-4--6404702321693896372 -> { numRecords: 207, dataSize: 69285 } 
[js_test:multi_coll_drop] 2016-04-06T02:53:44.768-0500 c20011| 2016-04-06T02:53:00.771-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-5--6404702321693896372 -> { numRecords: 1, dataSize: 733 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.769-0500 c20011| 2016-04-06T02:53:00.771-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-7--6404702321693896372 -> { numRecords: 1, dataSize: 60 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.771-0500 c20011| 2016-04-06T02:53:00.771-0500 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-9--6404702321693896372 -> { numRecords: 3, dataSize: 198 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.773-0500 c20011| 2016-04-06T02:53:01.737-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 330 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:11.737-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.774-0500 c20011| 2016-04-06T02:53:01.737-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 330 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.777-0500 c20011| 2016-04-06T02:53:01.737-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 330 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.778-0500 c20011| 2016-04-06T02:53:01.737-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:03.737Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.780-0500 c20011| 2016-04-06T02:53:02.744-0500 D COMMAND [conn29] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.781-0500 c20011| 2016-04-06T02:53:02.744-0500 D COMMAND [conn29] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.784-0500 c20011| 2016-04-06T02:53:02.744-0500 I COMMAND [conn29] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.784-0500 c20011| 2016-04-06T02:53:03.716-0500 D COMMAND [conn37] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.785-0500 c20011| 2016-04-06T02:53:03.716-0500 I COMMAND [conn37] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:44.801-0500 c20011| 2016-04-06T02:53:03.737-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 332 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:13.737-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.802-0500 c20011| 2016-04-06T02:53:03.737-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 332 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:44.805-0500 c20011| 2016-04-06T02:53:03.739-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 332 finished 
with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.806-0500 c20011| 2016-04-06T02:53:03.739-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:05.739Z [js_test:multi_coll_drop] 2016-04-06T02:53:44.808-0500 c20011| 2016-04-06T02:53:04.658-0500 D COMMAND [conn30] run command local.$cmd { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:44.810-0500 c20011| 2016-04-06T02:53:04.659-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33711 #45 (17 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:44.812-0500 c20011| 2016-04-06T02:53:04.659-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:44.813-0500 c20011| 2016-04-06T02:53:04.659-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:44.814-0500 c20011| 2016-04-06T02:53:04.659-0500 D COMMAND [conn45] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:53:45.225-0500 c20011| 2016-04-06T02:53:04.659-0500 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.230-0500 c20011| 2016-04-06T02:53:04.660-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.237-0500 c20011| 2016-04-06T02:53:04.661-0500 D COMMAND [conn35] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:45.240-0500 c20011| 2016-04-06T02:53:04.661-0500 D COMMAND [conn35] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:45.242-0500 c20011| 2016-04-06T02:53:04.661-0500 D REPL [conn35] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|3, t: 2 } and is durable through: { ts: Timestamp 1459929161000|1, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.259-0500 c20011| 2016-04-06T02:53:04.661-0500 D REPL [conn35] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.264-0500 c20011| 2016-04-06T02:53:04.661-0500 I COMMAND [conn35] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.278-0500 c20011| 2016-04-06T02:53:04.661-0500 I COMMAND [conn30] command local.oplog.rs command: getMore { getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } cursorid:19853084149 numYields:0 nreturned:2 reslen:692 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.278-0500 c20011| 2016-04-06T02:53:04.661-0500 D NETWORK [conn30] SocketException: remote: 192.168.100.28:59437 error: 9001 socket exception [CLOSED] server [192.168.100.28:59437] [js_test:multi_coll_drop] 2016-04-06T02:53:45.279-0500 c20011| 2016-04-06T02:53:04.661-0500 I NETWORK [conn30] end connection 192.168.100.28:59437 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.281-0500 c20011| 2016-04-06T02:53:04.661-0500 D COMMAND [conn45] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 3, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:45.281-0500 c20011| 2016-04-06T02:53:04.661-0500 D COMMAND [conn45] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:45.284-0500 c20011| 2016-04-06T02:53:04.663-0500 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.287-0500 c20011| 2016-04-06T02:53:04.663-0500 D COMMAND [conn28] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:45.288-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33713 #46 (17 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.292-0500 c20011| 2016-04-06T02:53:04.663-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.293-0500 c20011| 2016-04-06T02:53:04.663-0500 D COMMAND [conn46] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:53:45.298-0500 c20011| 2016-04-06T02:53:04.663-0500 I COMMAND [conn45] command admin.$cmd command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 3, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } numYields:0 reslen:159 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.300-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn45] SocketException: remote: 192.168.100.28:33711 error: 9001 socket exception [CLOSED] server [192.168.100.28:33711] 
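What the log is recording here is a standard two-phase election initiated by mongovm16:20013: the dryRun vote request above probes in the current term (3), the heartbeat that arrives carrying term 4 announces the real candidacy, and the entries below show the dryRun: false request followed by the old primary stepping down because a new term has begun. replSetRequestVotes is an internal node-to-node command; the sketch below simply restates the dry-run command document copied from the entry above to make its shape explicit:

    // Sent between replica-set members during an election; not a command
    // that clients or this test issue directly.
    db.adminCommand({
        replSetRequestVotes: 1,
        setName: "multidrop-configRS",
        dryRun: true,
        term: 3,
        candidateIndex: 2,
        configVersion: 1,
        lastCommittedOp: { ts: Timestamp(1459929163, 8), t: 3 }
    })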
[js_test:multi_coll_drop] 2016-04-06T02:53:45.304-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [conn45] end connection 192.168.100.28:33711 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.305-0500 c20011| 2016-04-06T02:53:04.663-0500 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.311-0500 c20011| 2016-04-06T02:53:04.663-0500 D COMMAND [conn28] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 4, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:45.312-0500 c20011| 2016-04-06T02:53:04.663-0500 D COMMAND [conn28] command: replSetRequestVotes [js_test:multi_coll_drop] 2016-04-06T02:53:45.313-0500 c20011| 2016-04-06T02:53:04.663-0500 I REPL [ReplicationExecutor] stepping down from primary, because a new term has begun: 4 [js_test:multi_coll_drop] 2016-04-06T02:53:45.313-0500 c20011| 2016-04-06T02:53:04.663-0500 I REPL [replExecDBWorker-2] transition to SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:45.316-0500 c20011| 2016-04-06T02:53:04.663-0500 D COMMAND [conn40] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.318-0500 c20011| 2016-04-06T02:53:04.663-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929171000|2, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "251" } [js_test:multi_coll_drop] 2016-04-06T02:53:45.322-0500 c20011| 2016-04-06T02:53:04.663-0500 D REPL [conn36] Required snapshot optime: { ts: Timestamp 1459929171000|1, t: 3 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "251" } [js_test:multi_coll_drop] 2016-04-06T02:53:45.324-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn35] SocketException: remote: 192.168.100.28:59592 error: 9001 socket exception [CLOSED] server [192.168.100.28:59592] [js_test:multi_coll_drop] 2016-04-06T02:53:45.324-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [conn35] end connection 192.168.100.28:59592 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.326-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn33] SocketException: remote: 192.168.100.28:59567 error: 9001 socket exception [CLOSED] server [192.168.100.28:59567] [js_test:multi_coll_drop] 2016-04-06T02:53:45.327-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [conn33] end connection 192.168.100.28:59567 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.328-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn39] SocketException: remote: 192.168.100.28:59637 error: 9001 socket exception [CLOSED] server [192.168.100.28:59637] [js_test:multi_coll_drop] 2016-04-06T02:53:45.329-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [conn39] end connection 192.168.100.28:59637 (14 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.331-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn43] SocketException: remote: 192.168.100.28:60975 error: 9001 socket exception [CLOSED] server 
[192.168.100.28:60975] [js_test:multi_coll_drop] 2016-04-06T02:53:45.331-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [conn43] end connection 192.168.100.28:60975 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.333-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn31] SocketException: remote: 192.168.100.28:59438 error: 9001 socket exception [CLOSED] server [192.168.100.28:59438] [js_test:multi_coll_drop] 2016-04-06T02:53:45.337-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [conn31] end connection 192.168.100.28:59438 (12 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.339-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn41] SocketException: remote: 192.168.100.28:60039 error: 9001 socket exception [CLOSED] server [192.168.100.28:60039] [js_test:multi_coll_drop] 2016-04-06T02:53:45.341-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [conn41] end connection 192.168.100.28:60039 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.349-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn46] SocketException: remote: 192.168.100.28:33713 error: 9001 socket exception [CLOSED] server [192.168.100.28:33713] [js_test:multi_coll_drop] 2016-04-06T02:53:45.350-0500 c20011| 2016-04-06T02:53:04.663-0500 D NETWORK [conn34] SocketException: remote: 192.168.100.28:59591 error: 9001 socket exception [CLOSED] server [192.168.100.28:59591] [js_test:multi_coll_drop] 2016-04-06T02:53:45.351-0500 c20011| 2016-04-06T02:53:04.664-0500 I NETWORK [conn34] end connection 192.168.100.28:59591 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.353-0500 c20011| 2016-04-06T02:53:04.663-0500 I NETWORK [conn46] end connection 192.168.100.28:33713 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.357-0500 c20011| 2016-04-06T02:53:04.664-0500 D NETWORK [conn42] SocketException: remote: 192.168.100.28:60973 error: 9001 socket exception [CLOSED] server [192.168.100.28:60973] [js_test:multi_coll_drop] 2016-04-06T02:53:45.358-0500 c20011| 2016-04-06T02:53:04.664-0500 I NETWORK [conn42] end connection 192.168.100.28:60973 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:45.359-0500 c20013| 2016-04-06T02:52:26.884-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|9 and ending at ts: Timestamp 1459929146000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:45.360-0500 c20013| 2016-04-06T02:52:26.884-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:45.367-0500 c20013| 2016-04-06T02:52:26.884-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.368-0500 c20013| 2016-04-06T02:52:26.884-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.370-0500 c20013| 2016-04-06T02:52:26.884-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.372-0500 c20013| 2016-04-06T02:52:26.884-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.372-0500 c20013| 2016-04-06T02:52:26.884-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.373-0500 c20013| 2016-04-06T02:52:26.884-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.373-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.375-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.376-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.377-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.378-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.378-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:45.379-0500 d20010| 2016-04-06T02:53:29.109-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929198970875 microSec, clearing pool for mongovm16:20011 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:53:45.380-0500 d20010| 2016-04-06T02:53:29.114-0500 I ASIO [conn5] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:45.380-0500 d20010| 2016-04-06T02:53:29.114-0500 I ASIO [conn5] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:45.381-0500 d20010| 2016-04-06T02:53:29.115-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:45.382-0500 d20010| 2016-04-06T02:53:29.117-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.386-0500 d20010| 2016-04-06T02:53:29.117-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 40.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", 
shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:250 locks:{} protocol:op_command 6540ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.391-0500 d20010| 2016-04-06T02:53:29.119-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 41.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:45.396-0500 s20014| 2016-04-06T02:53:22.041-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -51.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.400-0500 s20014| 2016-04-06T02:53:22.042-0500 D ASIO [conn1] startCommand: RemoteCommand 462 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.042-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.401-0500 s20014| 2016-04-06T02:53:22.042-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 462 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.404-0500 s20014| 2016-04-06T02:53:22.043-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 462 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.407-0500 s20014| 2016-04-06T02:53:22.043-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.411-0500 s20014| 2016-04-06T02:53:22.046-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -50.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.413-0500 s20014| 2016-04-06T02:53:22.046-0500 D ASIO [conn1] startCommand: RemoteCommand 464 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.046-0500 cmd:{ find: "chunks", filter: { ns: 
"multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.413-0500 s20014| 2016-04-06T02:53:22.046-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 464 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.416-0500 s20014| 2016-04-06T02:53:22.047-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 464 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.416-0500 s20014| 2016-04-06T02:53:22.047-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.443-0500 s20014| 2016-04-06T02:53:22.049-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -49.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.446-0500 s20014| 2016-04-06T02:53:22.049-0500 D ASIO [conn1] startCommand: RemoteCommand 466 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.049-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.450-0500 s20014| 2016-04-06T02:53:22.049-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 466 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.456-0500 s20014| 2016-04-06T02:53:22.050-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 466 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.457-0500 s20014| 2016-04-06T02:53:22.050-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.461-0500 s20014| 2016-04-06T02:53:22.057-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -48.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk 
[{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.473-0500 s20014| 2016-04-06T02:53:22.058-0500 D ASIO [conn1] startCommand: RemoteCommand 468 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.058-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.476-0500 s20014| 2016-04-06T02:53:22.058-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 468 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.478-0500 s20014| 2016-04-06T02:53:22.058-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 468 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.479-0500 s20014| 2016-04-06T02:53:22.059-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.483-0500 s20014| 2016-04-06T02:53:22.063-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -47.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.487-0500 s20014| 2016-04-06T02:53:22.063-0500 D ASIO [conn1] startCommand: RemoteCommand 470 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.063-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.487-0500 s20014| 2016-04-06T02:53:22.064-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 470 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.490-0500 s20014| 2016-04-06T02:53:22.070-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 470 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.490-0500 s20014| 2016-04-06T02:53:22.070-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.495-0500 s20014| 2016-04-06T02:53:22.073-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: 
"shard0000", splitKeys: [ { _id: -46.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.498-0500 s20014| 2016-04-06T02:53:22.073-0500 D ASIO [conn1] startCommand: RemoteCommand 472 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.073-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.503-0500 s20014| 2016-04-06T02:53:22.074-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 472 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.504-0500 s20014| 2016-04-06T02:53:22.075-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 472 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.505-0500 s20014| 2016-04-06T02:53:22.075-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.508-0500 s20014| 2016-04-06T02:53:22.081-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -45.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.514-0500 s20014| 2016-04-06T02:53:22.081-0500 D ASIO [conn1] startCommand: RemoteCommand 474 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.081-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.517-0500 s20014| 2016-04-06T02:53:22.081-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 474 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.528-0500 s20014| 2016-04-06T02:53:22.082-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 474 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.530-0500 s20014| 2016-04-06T02:53:22.082-0500 I 
COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.538-0500 s20014| 2016-04-06T02:53:22.087-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -44.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.546-0500 s20014| 2016-04-06T02:53:22.088-0500 D ASIO [conn1] startCommand: RemoteCommand 476 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.088-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.550-0500 s20014| 2016-04-06T02:53:22.088-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 476 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.551-0500 s20014| 2016-04-06T02:53:22.091-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 476 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.552-0500 s20014| 2016-04-06T02:53:22.091-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.557-0500 s20014| 2016-04-06T02:53:22.097-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -43.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.577-0500 s20014| 2016-04-06T02:53:22.097-0500 D ASIO [conn1] startCommand: RemoteCommand 478 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.097-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.579-0500 s20014| 2016-04-06T02:53:22.097-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 478 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.583-0500 s20014| 2016-04-06T02:53:22.098-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 478 finished with response: { waitedMS: 0, cursor: { 
firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.585-0500 s20014| 2016-04-06T02:53:22.098-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.593-0500 s20014| 2016-04-06T02:53:22.102-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -42.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.604-0500 s20014| 2016-04-06T02:53:22.102-0500 D ASIO [conn1] startCommand: RemoteCommand 480 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.102-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.607-0500 s20014| 2016-04-06T02:53:22.102-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 480 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.612-0500 s20014| 2016-04-06T02:53:22.102-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 480 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.614-0500 s20014| 2016-04-06T02:53:22.102-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.617-0500 s20014| 2016-04-06T02:53:22.105-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -41.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.635-0500 s20014| 2016-04-06T02:53:22.105-0500 D ASIO [conn1] startCommand: RemoteCommand 482 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.105-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 
2016-04-06T02:53:45.636-0500 s20014| 2016-04-06T02:53:22.105-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 482 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.639-0500 s20014| 2016-04-06T02:53:22.107-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 482 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.649-0500 s20014| 2016-04-06T02:53:22.107-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.679-0500 s20014| 2016-04-06T02:53:22.116-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -40.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.682-0500 s20014| 2016-04-06T02:53:22.117-0500 D ASIO [conn1] startCommand: RemoteCommand 484 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.117-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.683-0500 s20014| 2016-04-06T02:53:22.117-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 484 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.691-0500 s20014| 2016-04-06T02:53:22.118-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 484 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.693-0500 s20014| 2016-04-06T02:53:22.118-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.708-0500 s20014| 2016-04-06T02:53:22.123-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -39.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.727-0500 s20014| 2016-04-06T02:53:22.124-0500 
D ASIO [conn1] startCommand: RemoteCommand 486 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.124-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.728-0500 s20014| 2016-04-06T02:53:22.124-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 486 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.733-0500 s20014| 2016-04-06T02:53:22.125-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 486 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.735-0500 s20014| 2016-04-06T02:53:22.126-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.739-0500 s20014| 2016-04-06T02:53:22.129-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -38.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.744-0500 s20014| 2016-04-06T02:53:22.129-0500 D ASIO [conn1] startCommand: RemoteCommand 488 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.129-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.745-0500 s20014| 2016-04-06T02:53:22.129-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 488 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.751-0500 s20014| 2016-04-06T02:53:22.129-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 488 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.763-0500 s20014| 2016-04-06T02:53:22.129-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.767-0500 s20014| 2016-04-06T02:53:22.134-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -37.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, 
ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.768-0500 s20014| 2016-04-06T02:53:22.134-0500 D ASIO [conn1] startCommand: RemoteCommand 490 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.134-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.770-0500 s20014| 2016-04-06T02:53:22.134-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 490 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:45.773-0500 s20014| 2016-04-06T02:53:22.134-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 490 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.774-0500 s20014| 2016-04-06T02:53:22.134-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.781-0500 s20014| 2016-04-06T02:53:22.138-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -36.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.783-0500 s20014| 2016-04-06T02:53:22.138-0500 D ASIO [conn1] startCommand: RemoteCommand 492 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.138-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.783-0500 s20014| 2016-04-06T02:53:22.138-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 492 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.786-0500 s20014| 2016-04-06T02:53:22.139-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 492 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.788-0500 s20014| 2016-04-06T02:53:22.139-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 
2016-04-06T02:53:45.791-0500 s20014| 2016-04-06T02:53:22.143-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -35.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.794-0500 s20014| 2016-04-06T02:53:22.143-0500 D ASIO [conn1] startCommand: RemoteCommand 494 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.143-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.797-0500 s20014| 2016-04-06T02:53:22.143-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 494 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.801-0500 s20014| 2016-04-06T02:53:22.144-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 494 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.803-0500 s20014| 2016-04-06T02:53:22.144-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:45.808-0500 s20014| 2016-04-06T02:53:22.147-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -34.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:45.811-0500 s20015| 2016-04-06T02:53:28.995-0500 D ASIO [Balancer] startCommand: RemoteCommand 113 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:58.995-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929208995), up: 81, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.820-0500 s20015| 2016-04-06T02:53:28.995-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 113 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.826-0500 s20015| 2016-04-06T02:53:29.106-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 113 -- target:mongovm16:20011 db:config 
expDate:2016-04-06T02:53:58.995-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929208995), up: 81, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:45.842-0500 s20015| 2016-04-06T02:53:29.106-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 113 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:45.843-0500 s20015| 2016-04-06T02:53:29.106-0500 D NETWORK [Balancer] Marking host mongovm16:20011 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:45.846-0500 s20015| 2016-04-06T02:53:29.106-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:45.860-0500 s20015| 2016-04-06T02:53:29.106-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:53:45.861-0500 s20015| 2016-04-06T02:53:29.108-0500 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:45.872-0500 s20015| 2016-04-06T02:53:29.109-0500 D ASIO [Balancer] startCommand: RemoteCommand 115 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:59.109-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929208995), up: 81, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.874-0500 s20015| 2016-04-06T02:53:29.109-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.878-0500 s20015| 2016-04-06T02:53:29.109-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 116 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.878-0500 s20015| 2016-04-06T02:53:29.114-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:45.879-0500 s20015| 2016-04-06T02:53:29.114-0500 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:45.882-0500 s20015| 2016-04-06T02:53:29.115-0500 D NETWORK [ReplicaSetMonitorWatcher] connected connection! 
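
The two retry patterns above recur throughout this log: mongos s20014 re-issues its splitChunk attempts each time the shard reports LockBusy (the continuous config-stepdown override keeps the distributed collection lock busy), and the Balancer on s20015 retries its config.mongos ping write after retriable errors (HostUnreachable, then NotMaster once mongovm16:20011 has stepped down). A hedged shell-side sketch of that retry loop, using the standard jstests helper assert.soon and the mongos "split" admin command; the helper name and the loop are illustrative, not the multi_coll_drop.js source:

    // Sketch only: ask mongos to split until the distributed collection lock
    // frees up. The namespace and split key mirror the log; splitWithRetry is
    // a hypothetical helper, not part of the test.
    function splitWithRetry(mongosConn, key) {
        assert.soon(function() {
            var res = mongosConn.adminCommand(
                {split: "multidrop.coll", middle: {_id: key}});
            if (res.ok) {
                return true;
            }
            // LockBusy is transient while a config stepdown holds the lock.
            print("split at " + key + " failed, will retry: " + tojson(res));
            return false;
        }, "split at " + key + " never succeeded", 60 * 1000);
    }
    splitWithRetry(db.getMongo(), -40.0);
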
[js_test:multi_coll_drop] 2016-04-06T02:53:45.882-0500 s20015| 2016-04-06T02:53:29.134-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.883-0500 s20015| 2016-04-06T02:53:29.134-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 116 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:45.884-0500 s20015| 2016-04-06T02:53:29.134-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 115 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.885-0500 s20015| 2016-04-06T02:53:29.134-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 115 finished with response: { ok: 0.0, errmsg: "not master", code: 10107 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.886-0500 s20015| 2016-04-06T02:53:29.135-0500 D NETWORK [Balancer] Marking host mongovm16:20011 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:45.887-0500 s20015| 2016-04-06T02:53:29.135-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: NotMaster: not master [js_test:multi_coll_drop] 2016-04-06T02:53:45.890-0500 s20015| 2016-04-06T02:53:29.135-0500 D ASIO [Balancer] startCommand: RemoteCommand 118 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:53:59.135-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929208995), up: 81, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.891-0500 s20015| 2016-04-06T02:53:29.135-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:45.893-0500 s20015| 2016-04-06T02:53:29.136-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 119 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:45.895-0500 s20015| 2016-04-06T02:53:29.139-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:45.904-0500 s20015| 2016-04-06T02:53:29.139-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 119 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:45.906-0500 s20015| 2016-04-06T02:53:29.139-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 118 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:45.909-0500 c20012| 2016-04-06T02:53:29.103-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.910-0500 c20012| 2016-04-06T02:53:29.103-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:45.911-0500 c20012| 2016-04-06T02:53:29.104-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } numYields:0 reslen:478 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.916-0500 c20012| 2016-04-06T02:53:29.104-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1343 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.917-0500 c20012| 2016-04-06T02:53:29.108-0500 D COMMAND 
[conn33] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.925-0500 c20012| 2016-04-06T02:53:29.108-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.927-0500 c20012| 2016-04-06T02:53:29.108-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.927-0500 c20012| 2016-04-06T02:53:29.108-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:45.928-0500 c20012| 2016-04-06T02:53:29.109-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.928-0500 c20012| 2016-04-06T02:53:29.109-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1351 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:45.933-0500 c20012| 2016-04-06T02:53:29.109-0500 I ASIO [NetworkInterfaceASIO-Replication-0] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.940-0500 c20012| 2016-04-06T02:53:29.109-0500 I ASIO [NetworkInterfaceASIO-Replication-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:45.942-0500 c20012| 2016-04-06T02:53:29.109-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.946-0500 c20012| 2016-04-06T02:53:29.109-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1355 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.948-0500 c20012| 2016-04-06T02:53:29.111-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.951-0500 c20012| 2016-04-06T02:53:29.111-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1355 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:45.952-0500 c20012| 2016-04-06T02:53:29.111-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1350 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:45.956-0500 c20012| 2016-04-06T02:53:29.112-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1350 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 5, durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, opTime: { ts: Timestamp 1459929209000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:45.957-0500 c20012| 2016-04-06T02:53:29.112-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:45.960-0500 c20012| 2016-04-06T02:53:29.112-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:31.612Z [js_test:multi_coll_drop] 2016-04-06T02:53:45.963-0500 c20012| 2016-04-06T02:53:29.117-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.967-0500 c20012| 2016-04-06T02:53:29.117-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { 
ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:45.972-0500 c20012| 2016-04-06T02:53:29.117-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:45.974-0500 2016-04-06T02:53:29.980-0500 I NETWORK [thread2] reconnect mongovm16:20011 (192.168.100.28) ok [js_test:multi_coll_drop] 2016-04-06T02:53:45.996-0500 c20012| 2016-04-06T02:53:29.118-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:46.026-0500 c20012| 2016-04-06T02:53:29.118-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:46.028-0500 c20012| 2016-04-06T02:53:29.292-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.029-0500 c20012| 2016-04-06T02:53:29.292-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:46.030-0500 c20012| 2016-04-06T02:53:29.293-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 6 } numYields:0 reslen:459 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:46.035-0500 c20012| 2016-04-06T02:53:29.494-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1357 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:39.494-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.038-0500 c20012| 2016-04-06T02:53:29.494-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1357 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.040-0500 c20012| 2016-04-06T02:53:29.495-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1357 finished with response: { ok: 1.0, electionTime: new Date(6270348198540214273), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 6, primaryId: 2, durableOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, opTime: { ts: Timestamp 1459929209000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:46.042-0500 c20012| 2016-04-06T02:53:29.495-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:46.050-0500 c20012| 2016-04-06T02:53:29.495-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:31.995Z [js_test:multi_coll_drop] 2016-04-06T02:53:46.058-0500 2016-04-06T02:53:30.733-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no
longer connected (idle 10 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:53:46.058-0500 c20012| 2016-04-06T02:53:29.995-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.066-0500 c20012| 2016-04-06T02:53:29.996-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1359 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:59.996-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:46.070-0500 c20012| 2016-04-06T02:53:29.996-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1359 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.081-0500 c20012| 2016-04-06T02:53:29.996-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1359 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.082-0500 c20012| 2016-04-06T02:53:29.996-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20013 starting at filter: { ts: { $gte: Timestamp 1459929201000|1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:46.086-0500 c20012| 2016-04-06T02:53:29.996-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1361 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:34.996-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929201000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.098-0500 c20012| 2016-04-06T02:53:29.996-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.110-0500 c20012| 2016-04-06T02:53:29.996-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:46.110-0500 c20012| 2016-04-06T02:53:29.996-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20011: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:46.113-0500 c20012| 2016-04-06T02:53:29.996-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:46.114-0500 c20012| 2016-04-06T02:53:29.997-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.121-0500 c20012| 2016-04-06T02:53:29.997-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.131-0500 c20012| 2016-04-06T02:53:29.997-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1363 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.133-0500 c20012| 2016-04-06T02:53:29.997-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1362 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.135-0500 c20012| 2016-04-06T02:53:29.997-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.140-0500 c20012| 2016-04-06T02:53:29.997-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.141-0500 c20012| 2016-04-06T02:53:29.997-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:46.146-0500 c20012| 2016-04-06T02:53:29.997-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.148-0500 c20012| 2016-04-06T02:53:29.997-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1364 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.151-0500 c20012| 2016-04-06T02:53:29.998-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.153-0500 c20012| 2016-04-06T02:53:29.998-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1362 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:46.155-0500 c20012| 2016-04-06T02:53:29.998-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1361 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.156-0500 c20012| 2016-04-06T02:53:29.998-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.156-0500 c20012| 2016-04-06T02:53:29.998-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1364 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:46.161-0500 c20012| 2016-04-06T02:53:29.998-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1363 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.162-0500 c20012| 2016-04-06T02:53:29.998-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1363 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.166-0500 c20012| 2016-04-06T02:53:29.998-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1361 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929201000|1, t: 5, h: -1628857208926061585, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929201977), up: 74, waiting: true } } }, { ts: Timestamp 1459929207000|2, t: 6, h: 8199327370007018684, v: 2, op: "n", ns: "", o: { msg: "new 
primary" } }, { ts: Timestamp 1459929209000|1, t: 6, h: -5730911673521721576, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929208995), up: 81, waiting: false } } } ], id: 22818882735, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.173-0500 c20012| 2016-04-06T02:53:29.999-0500 D REPL [rsBackgroundSync-0] fetcher read 3 operations from remote oplog starting at ts: Timestamp 1459929201000|1 and ending at ts: Timestamp 1459929209000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:46.178-0500 c20012| 2016-04-06T02:53:29.999-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:53:46.180-0500 c20012| 2016-04-06T02:53:30.000-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:46.188-0500 c20012| 2016-04-06T02:53:30.005-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:46.189-0500 c20012| 2016-04-06T02:53:30.005-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.193-0500 c20012| 2016-04-06T02:53:30.005-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.193-0500 c20012| 2016-04-06T02:53:30.005-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.194-0500 c20012| 2016-04-06T02:53:30.005-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.195-0500 c20012| 2016-04-06T02:53:30.005-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.195-0500 c20012| 2016-04-06T02:53:30.005-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.199-0500 c20012| 2016-04-06T02:53:30.005-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.202-0500 c20012| 2016-04-06T02:53:30.005-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.203-0500 c20012| 2016-04-06T02:53:30.006-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.204-0500 c20012| 2016-04-06T02:53:30.006-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.205-0500 c20012| 2016-04-06T02:53:30.007-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.205-0500 c20012| 2016-04-06T02:53:30.007-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.205-0500 c20012| 2016-04-06T02:53:30.009-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.210-0500 c20012| 2016-04-06T02:53:30.009-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.213-0500 c20012| 
2016-04-06T02:53:30.009-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.213-0500 c20012| 2016-04-06T02:53:30.009-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.225-0500 c20012| 2016-04-06T02:53:30.009-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.226-0500 c20012| 2016-04-06T02:53:30.010-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.242-0500 c20012| 2016-04-06T02:53:30.013-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.251-0500 c20012| 2016-04-06T02:53:30.013-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.253-0500 c20012| 2016-04-06T02:53:30.014-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.253-0500 c20012| 2016-04-06T02:53:30.014-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.271-0500 c20012| 2016-04-06T02:53:30.014-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.275-0500 c20012| 2016-04-06T02:53:30.015-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.285-0500 c20012| 2016-04-06T02:53:30.015-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.301-0500 c20012| 2016-04-06T02:53:30.016-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.309-0500 c20012| 2016-04-06T02:53:30.017-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.326-0500 c20012| 2016-04-06T02:53:30.019-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1367 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.019-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:46.332-0500 c20012| 2016-04-06T02:53:30.019-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1367 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.333-0500 c20012| 2016-04-06T02:53:30.027-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.333-0500 c20012| 2016-04-06T02:53:30.028-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.333-0500 c20012| 2016-04-06T02:53:30.028-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.334-0500 c20012| 2016-04-06T02:53:30.030-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool 
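
Requests 1359 and 1361 above, and the getMore commands 1367/1377/1379 around them, show the full oplog-fetch sequence c20012 runs after choosing mongovm16:20013 as its sync source: a plain find for the first oplog entry, then a tailable awaitData find from the last fetched timestamp, then repeated getMores on the same cursor. The term and lastKnownCommittedOpTime fields in those commands are replication-internal; a hedged shell reproduction of the rest of the handshake:

    // Sketch only: replay the fetcher's wire commands by hand against "local".
    // Timestamp(1459929201, 1) is the starting optime seen in the log.
    var local = db.getSiblingDB("local");
    var first = local.runCommand({
        find: "oplog.rs",
        filter: {ts: {$gte: Timestamp(1459929201, 1)}},
        tailable: true,
        oplogReplay: true,  // seek straight to the $gte timestamp
        awaitData: true,
        maxTimeMS: 60000
    });
    assert.commandWorked(first);
    // Later batches drain through getMore on the same cursor id, with the
    // 2500ms await timeout visible in commands 1367/1377/1379.
    var more = local.runCommand({
        getMore: first.cursor.id,
        collection: "oplog.rs",
        maxTimeMS: 2500
    });
    printjson(more.cursor.nextBatch);
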
[js_test:multi_coll_drop] 2016-04-06T02:53:46.334-0500 c20012| 2016-04-06T02:53:30.030-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.336-0500 c20012| 2016-04-06T02:53:30.031-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:46.339-0500 c20012| 2016-04-06T02:53:30.031-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:46.342-0500 c20012| 2016-04-06T02:53:30.031-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.343-0500 c20012| 2016-04-06T02:53:30.031-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.345-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.355-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.364-0500 c20012| 2016-04-06T02:53:30.032-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:46.365-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.366-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.370-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.372-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.374-0500 c20012| 2016-04-06T02:53:30.032-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:46.375-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.376-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.380-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.395-0500 c20012| 2016-04-06T02:53:30.032-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929207000|2, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } 
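
The Reporter lines nearby (requests 1363, 1368, and later 1370/1372/1375) all carry the same internal command, replSetUpdatePosition: the syncing member reports a durableOpTime and an appliedOpTime, each a { ts, t } pair, for every member it tracks, and the sync source uses these reports to advance the majority commit point (visible below when c20012 updates _lastCommittedOpTime). For reference, the document shape reconstructed from request 1368, shown as data only since this is internal replication traffic that clients should not send:

    // Shape of one replSetUpdatePosition report (values copied from request
    // 1368 in the log). Printed for reference, not meant to be run as a command.
    var updatePosition = {
        replSetUpdatePosition: 1,
        optimes: [
            {durableOpTime: {ts: Timestamp(1459929201, 1), t: NumberLong(5)},
             appliedOpTime: {ts: Timestamp(1459929209, 1), t: NumberLong(5)},
             memberId: 0, cfgver: 1},
            {durableOpTime: {ts: Timestamp(1459929201, 1), t: NumberLong(5)},
             appliedOpTime: {ts: Timestamp(1459929207, 2), t: NumberLong(6)},
             memberId: 1, cfgver: 1},
            {durableOpTime: {ts: Timestamp(1459929201, 1), t: NumberLong(5)},
             appliedOpTime: {ts: Timestamp(1459929201, 1), t: NumberLong(5)},
             memberId: 2, cfgver: 1}
        ]
    };
    printjson(updatePosition);
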
[js_test:multi_coll_drop] 2016-04-06T02:53:46.402-0500 c20012| 2016-04-06T02:53:30.032-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1368 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929207000|2, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.403-0500 c20012| 2016-04-06T02:53:30.032-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1368 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.405-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.407-0500 c20012| 2016-04-06T02:53:30.032-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.410-0500 c20012| 2016-04-06T02:53:30.033-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.412-0500 c20012| 2016-04-06T02:53:30.033-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1368 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.414-0500 c20012| 2016-04-06T02:53:30.034-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.415-0500 c20012| 2016-04-06T02:53:30.034-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.416-0500 c20012| 2016-04-06T02:53:30.034-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.416-0500 c20012| 2016-04-06T02:53:30.034-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.416-0500 c20012| 2016-04-06T02:53:30.034-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.417-0500 c20012| 2016-04-06T02:53:30.035-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.419-0500 c20012| 2016-04-06T02:53:30.036-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.419-0500 c20012| 2016-04-06T02:53:30.036-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.421-0500 c20012| 2016-04-06T02:53:30.036-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.422-0500 c20012| 2016-04-06T02:53:30.036-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.424-0500 c20012| 2016-04-06T02:53:30.036-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker 
Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.424-0500 c20012| 2016-04-06T02:53:30.036-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.426-0500 c20012| 2016-04-06T02:53:30.036-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.427-0500 c20012| 2016-04-06T02:53:30.040-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.430-0500 c20012| 2016-04-06T02:53:30.040-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.435-0500 c20012| 2016-04-06T02:53:30.040-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.435-0500 c20012| 2016-04-06T02:53:30.040-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.437-0500 c20012| 2016-04-06T02:53:30.040-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.438-0500 c20012| 2016-04-06T02:53:30.041-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:46.440-0500 c20012| 2016-04-06T02:53:30.041-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.444-0500 c20012| 2016-04-06T02:53:30.041-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1370 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.448-0500 c20012| 2016-04-06T02:53:30.041-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1370 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.452-0500 c20012| 2016-04-06T02:53:30.041-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1370 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.457-0500 c20012| 2016-04-06T02:53:30.042-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 
5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929207000|2, t: 6 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.458-0500 c20012| 2016-04-06T02:53:30.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1372 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929207000|2, t: 6 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.459-0500 c20012| 2016-04-06T02:53:30.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1372 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.461-0500 c20012| 2016-04-06T02:53:30.042-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1372 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.463-0500 c20012| 2016-04-06T02:53:30.043-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1367 finished with response: { cursor: { nextBatch: [], id: 22818882735, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.466-0500 c20012| 2016-04-06T02:53:30.044-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929207000|2, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.470-0500 c20012| 2016-04-06T02:53:30.044-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:46.476-0500 c20012| 2016-04-06T02:53:30.044-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.481-0500 c20012| 2016-04-06T02:53:30.044-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1375 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:46.485-0500 c20012| 2016-04-06T02:53:30.044-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting 
asynchronous command 1375 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.487-0500 c20012| 2016-04-06T02:53:30.044-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1375 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.496-0500 c20012| 2016-04-06T02:53:30.045-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1377 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.045-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929207000|2, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:46.496-0500 c20012| 2016-04-06T02:53:30.045-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1377 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.499-0500 c20012| 2016-04-06T02:53:30.046-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1377 finished with response: { cursor: { nextBatch: [], id: 22818882735, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.502-0500 c20012| 2016-04-06T02:53:30.047-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.503-0500 c20012| 2016-04-06T02:53:30.047-0500 D REPL [conn42] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929209000|1, t: 6 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.504-0500 c20012| 2016-04-06T02:53:30.047-0500 D REPL [conn42] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999982μs [js_test:multi_coll_drop] 2016-04-06T02:53:46.506-0500 c20012| 2016-04-06T02:53:30.048-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929209000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.507-0500 c20012| 2016-04-06T02:53:30.048-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:46.510-0500 c20012| 2016-04-06T02:53:30.048-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:46.514-0500 c20012| 2016-04-06T02:53:30.048-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.516-0500 c20012| 2016-04-06T02:53:30.048-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:46.519-0500 c20012| 2016-04-06T02:53:30.048-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1379 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.048-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929209000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:46.524-0500 c20012| 2016-04-06T02:53:30.048-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:46.539-0500 c20012| 2016-04-06T02:53:30.048-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1379 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:46.545-0500 c20012| 2016-04-06T02:53:30.055-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1379 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929210000|1, t: 6, h: 7748934581568548131, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929210054), up: 83, waiting: true } } } ], id: 22818882735, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.550-0500 c20012| 2016-04-06T02:53:30.055-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929210000|1 and ending at ts: Timestamp 1459929210000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:46.550-0500 c20012| 2016-04-06T02:53:30.055-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
[js_test:multi_coll_drop] 2016-04-06T02:53:46.553-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.553-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.554-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.554-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.556-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.557-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.557-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.558-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.560-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.561-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.562-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.564-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.565-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.566-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.567-0500 c20012| 2016-04-06T02:53:30.056-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:46.567-0500 c20012| 2016-04-06T02:53:30.056-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "mongovm16:20015" }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.571-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.578-0500 c20012| 2016-04-06T02:53:30.056-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.579-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.582-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.583-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.584-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.585-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.588-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.592-0500 c20012| 2016-04-06T02:53:30.057-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1381 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.057-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.596-0500 c20012| 2016-04-06T02:53:30.057-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1381 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:46.599-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.600-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.603-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.604-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.605-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.606-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.610-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.610-0500 c20012| 2016-04-06T02:53:30.057-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.612-0500 c20012| 2016-04-06T02:53:30.062-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.614-0500 c20012| 2016-04-06T02:53:30.063-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.618-0500 c20012| 2016-04-06T02:53:30.063-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
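The RemoteCommand 1377/1379/1381 traffic interleaved above is the standard oplog tail: the secondary issues awaitData getMores with a 2500ms budget, piggybacking its term and lastKnownCommittedOpTime, and an empty nextBatch just means the wait expired with nothing new. A rough shell equivalent follows; the stock cursor API cannot attach the replication-protocol fields, so treat this as an illustration of the traffic, not a reproduction of the internal fetcher:

// Sketch: tail local.oplog.rs on the sync source the way rsBackgroundSync
// does, minus the term/lastKnownCommittedOpTime fields the real fetcher
// adds to each getMore. Host and starting timestamp come from the log.
var syncSrc = new Mongo("mongovm16:20013").getDB("local");
var cur = syncSrc.oplog.rs.find({ ts: { $gte: Timestamp(1459929209, 1) } })
                 .addOption(DBQuery.Option.tailable)
                 .addOption(DBQuery.Option.awaitData);
while (cur.hasNext()) printjson(cur.next());   // one doc per replicated op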
[js_test:multi_coll_drop] 2016-04-06T02:53:46.624-0500 c20012| 2016-04-06T02:53:30.063-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.633-0500 c20012| 2016-04-06T02:53:30.063-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1382 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929209000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.634-0500 c20012| 2016-04-06T02:53:30.063-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1382 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:46.634-0500 c20012| 2016-04-06T02:53:30.063-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1382 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.638-0500 c20012| 2016-04-06T02:53:30.066-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.642-0500 c20012| 2016-04-06T02:53:30.066-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1384 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.642-0500 c20012| 2016-04-06T02:53:30.066-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1384 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:46.644-0500 c20012| 2016-04-06T02:53:30.067-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1384 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.645-0500 c20012| 2016-04-06T02:53:30.067-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1381 finished with response: { cursor: { nextBatch: [], id: 22818882735, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.646-0500 c20012| 2016-04-06T02:53:30.067-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929210000|1, t: 6 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.646-0500 c20012| 2016-04-06T02:53:30.067-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:46.650-0500 c20012| 2016-04-06T02:53:30.067-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1387 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.067-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.653-0500 c20012| 2016-04-06T02:53:30.067-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1387 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:46.654-0500 2016-04-06T02:53:30.897-0500 I NETWORK [thread2] trying reconnect to mongovm16:20012 (192.168.100.28) failed
[js_test:multi_coll_drop] 2016-04-06T02:53:46.657-0500 c20012| 2016-04-06T02:53:30.901-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:41099 #44 (13 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:46.657-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.658-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.659-0500 c20013| 2016-04-06T02:52:26.885-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:46.660-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.660-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.661-0500 c20013| 2016-04-06T02:52:26.885-0500 D QUERY [repl writer worker 11] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.662-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.662-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.664-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.666-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.667-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.669-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
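Each Reporter round trip above is the sync-source feedback loop: replSetUpdatePosition advances the upstream node's view of every member's durable and applied optimes, which in turn drives the _lastCommittedOpTime updates interleaved in the log. The same bookkeeping can be read back through replSetGetStatus; exact field names shift between 3.x builds, so the sketch below is indicative only:

// Sketch: inspect per-member optime bookkeeping on any replica-set member.
// replSetGetStatus is a real command; treat the member fields printed here
// as 3.x-era output whose names may differ slightly in this dev build.
var status = db.adminCommand({ replSetGetStatus: 1 });
status.members.forEach(function (m) {
    print(m.name + " state=" + m.stateStr + " optime=" + tojson(m.optime));
});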
[js_test:multi_coll_drop] 2016-04-06T02:53:46.671-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.671-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.673-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.674-0500 c20013| 2016-04-06T02:52:26.885-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.676-0500 c20013| 2016-04-06T02:52:26.886-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.679-0500 c20013| 2016-04-06T02:52:26.886-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1225 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.886-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|8, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.680-0500 c20013| 2016-04-06T02:52:26.886-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1225 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:46.680-0500 c20013| 2016-04-06T02:52:26.887-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.681-0500 c20013| 2016-04-06T02:52:26.887-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.681-0500 c20013| 2016-04-06T02:52:26.887-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.682-0500 c20013| 2016-04-06T02:52:26.887-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.682-0500 c20013| 2016-04-06T02:52:26.887-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:46.686-0500 c20013| 2016-04-06T02:52:26.888-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:46.689-0500 c20013| 2016-04-06T02:52:26.888-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.692-0500 c20013| 2016-04-06T02:52:26.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1226 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|8, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.693-0500 c20013| 2016-04-06T02:52:26.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1226 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:46.696-0500 c20013| 2016-04-06T02:52:26.888-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1226 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.702-0500 c20013| 2016-04-06T02:52:26.891-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.708-0500 c20013| 2016-04-06T02:52:26.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1228 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.713-0500 c20013| 2016-04-06T02:52:26.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1228 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:46.724-0500 c20013| 2016-04-06T02:52:26.891-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1228 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.727-0500 c20013| 2016-04-06T02:52:26.891-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1225 finished with response: { cursor: { nextBatch: [], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.728-0500 c20013| 2016-04-06T02:52:26.891-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929146000|9, t: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.732-0500 c20013| 2016-04-06T02:52:26.892-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:46.736-0500 c20013| 2016-04-06T02:52:26.892-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1231 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.892-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.738-0500 c20013| 2016-04-06T02:52:26.892-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1231 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:46.742-0500 c20013| 2016-04-06T02:52:26.892-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.744-0500 c20013| 2016-04-06T02:52:26.892-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.747-0500 c20013| 2016-04-06T02:52:26.892-0500 D COMMAND [conn15] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }, limit: 1, maxTimeMS: 30000 }
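conn15's read resolves the sharded-collection metadata document directly by its _id, which is why the plan below is IDHACK (a direct _id lookup with no plan ranking). The shell equivalent, pointed at the fixture's config-server member:

// Sketch: same lookup conn15 performs above. The metadata document for a
// sharded collection is keyed by its full namespace string.
var configDB = new Mongo("mongovm16:20013").getDB("config");
printjson(configDB.collections.findOne({ _id: "multidrop.coll" }));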
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.751-0500 c20013| 2016-04-06T02:52:26.892-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:46.762-0500 c20013| 2016-04-06T02:52:26.892-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:46.768-0500 c20011| 2016-04-06T02:53:04.664-0500 I COMMAND [conn38] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:562 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12889ms [js_test:multi_coll_drop] 2016-04-06T02:53:46.773-0500 c20013| 2016-04-06T02:52:26.894-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1231 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929146000|10, t: 2, h: 8129632561130330747, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: -75.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 25449496203, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.778-0500 c20013| 2016-04-06T02:52:26.894-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929146000|10 and ending at ts: Timestamp 1459929146000|10 [js_test:multi_coll_drop] 2016-04-06T02:53:46.779-0500 c20013| 2016-04-06T02:52:26.895-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:46.779-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.780-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.781-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.782-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.783-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.783-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.784-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.785-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.785-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.798-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.822-0500 c20013| 2016-04-06T02:52:26.896-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1233 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.896-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:46.822-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.823-0500 c20013| 2016-04-06T02:52:26.896-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1233 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:46.824-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.827-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.827-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:46.831-0500 s20014| 2016-04-06T02:53:22.147-0500 D ASIO [conn1] startCommand: RemoteCommand 496 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.147-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:46.834-0500 s20014| 
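Worth noting in the Request 1231 response above: the two chunk updates produced by a split arrive as a single applyOps command entry (op: "c" on config.$cmd), so the metadata change replicates atomically to the config-server secondaries. A sketch for fishing such entries back out of a config member's oplog (run it against a member such as mongovm16:20012):

// Sketch: pull recent applyOps entries that touched config.chunks out of
// the oplog. The filter shape mirrors the logged entry above.
var oplog = db.getSiblingDB("local").oplog.rs;
oplog.find({ op: "c", ns: "config.$cmd", "o.applyOps.ns": "config.chunks" })
     .sort({ $natural: -1 })               // newest first
     .limit(5)
     .forEach(function (entry) { printjson(entry.o.applyOps); });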
[js_test:multi_coll_drop] 2016-04-06T02:53:46.840-0500 s20014| 2016-04-06T02:53:22.148-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 496 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.843-0500 s20014| 2016-04-06T02:53:22.148-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:46.870-0500 s20014| 2016-04-06T02:53:22.149-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -33.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:46.872-0500 s20014| 2016-04-06T02:53:22.150-0500 D ASIO [conn1] startCommand: RemoteCommand 498 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.149-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.874-0500 s20014| 2016-04-06T02:53:22.150-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 498 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:46.880-0500 s20014| 2016-04-06T02:53:22.150-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 498 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.883-0500 s20014| 2016-04-06T02:53:22.150-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:46.890-0500 s20014| 2016-04-06T02:53:22.152-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -32.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:46.893-0500 s20014| 2016-04-06T02:53:22.152-0500 D ASIO [conn1] startCommand: RemoteCommand 500 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.152-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.896-0500 s20014| 2016-04-06T02:53:22.152-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 500 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:46.902-0500 s20014| 2016-04-06T02:53:22.152-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 500 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.903-0500 s20014| 2016-04-06T02:53:22.152-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:46.908-0500 s20014| 2016-04-06T02:53:22.154-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -31.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:46.912-0500 s20014| 2016-04-06T02:53:22.155-0500 D ASIO [conn1] startCommand: RemoteCommand 502 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.155-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.913-0500 s20014| 2016-04-06T02:53:22.155-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 502 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:46.920-0500 s20014| 2016-04-06T02:53:22.155-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 502 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.922-0500 s20014| 2016-04-06T02:53:22.155-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:46.934-0500 s20014| 2016-04-06T02:53:22.157-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -30.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:46.956-0500 s20014| 2016-04-06T02:53:22.157-0500 D ASIO [conn1] startCommand: RemoteCommand 504 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.157-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.958-0500 s20014| 2016-04-06T02:53:22.157-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 504 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:46.962-0500 s20014| 2016-04-06T02:53:22.157-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 504 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.963-0500 s20014| 2016-04-06T02:53:22.157-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:46.971-0500 s20014| 2016-04-06T02:53:22.159-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -29.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:46.978-0500 s20014| 2016-04-06T02:53:22.159-0500 D ASIO [conn1] startCommand: RemoteCommand 506 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.159-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.980-0500 s20014| 2016-04-06T02:53:22.159-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 506 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:46.988-0500 s20014| 2016-04-06T02:53:22.159-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 506 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:46.990-0500 s20014| 2016-04-06T02:53:22.159-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:46.998-0500 s20014| 2016-04-06T02:53:22.161-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -28.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.011-0500 s20014| 2016-04-06T02:53:22.161-0500 D ASIO [conn1] startCommand: RemoteCommand 508 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.161-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.011-0500 s20014| 2016-04-06T02:53:22.161-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 508 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.012-0500 s20014| 2016-04-06T02:53:22.162-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 508 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.012-0500 s20014| 2016-04-06T02:53:22.162-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.016-0500 s20014| 2016-04-06T02:53:22.164-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -27.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.029-0500 s20014| 2016-04-06T02:53:22.164-0500 D ASIO [conn1] startCommand: RemoteCommand 510 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.164-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.029-0500 s20014| 2016-04-06T02:53:22.164-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 510 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.037-0500 s20014| 2016-04-06T02:53:22.165-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 510 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.037-0500 s20014| 2016-04-06T02:53:22.165-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.040-0500 s20014| 2016-04-06T02:53:22.167-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -26.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.042-0500 s20014| 2016-04-06T02:53:22.168-0500 D ASIO [conn1] startCommand: RemoteCommand 512 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.168-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.043-0500 s20014| 2016-04-06T02:53:22.168-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 512 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.048-0500 s20014| 2016-04-06T02:53:22.168-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 512 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.051-0500 s20014| 2016-04-06T02:53:22.168-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.057-0500 s20014| 2016-04-06T02:53:22.171-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -25.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.058-0500 s20014| 2016-04-06T02:53:22.171-0500 D ASIO [conn1] startCommand: RemoteCommand 514 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.171-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.058-0500 s20014| 2016-04-06T02:53:22.171-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 514 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.059-0500 s20014| 2016-04-06T02:53:22.173-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 514 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.061-0500 s20014| 2016-04-06T02:53:22.173-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.075-0500 s20014| 2016-04-06T02:53:22.184-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -24.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.088-0500 s20014| 2016-04-06T02:53:22.184-0500 D ASIO [conn1] startCommand: RemoteCommand 516 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.184-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.088-0500 s20014| 2016-04-06T02:53:22.184-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 516 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.095-0500 s20014| 2016-04-06T02:53:22.195-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 516 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.105-0500 s20014| 2016-04-06T02:53:22.195-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.110-0500 s20014| 2016-04-06T02:53:22.203-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -23.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.114-0500 s20014| 2016-04-06T02:53:22.204-0500 D ASIO [conn1] startCommand: RemoteCommand 518 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.204-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.115-0500 s20014| 2016-04-06T02:53:22.204-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 518 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.122-0500 s20014| 2016-04-06T02:53:22.204-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 518 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.122-0500 s20014| 2016-04-06T02:53:22.204-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.127-0500 s20014| 2016-04-06T02:53:22.215-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -22.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.129-0500 s20014| 2016-04-06T02:53:22.215-0500 D ASIO [conn1] startCommand: RemoteCommand 520 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.215-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.131-0500 s20014| 2016-04-06T02:53:22.215-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 520 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:47.137-0500 s20014| 2016-04-06T02:53:22.217-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 520 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.138-0500 s20014| 2016-04-06T02:53:22.217-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.141-0500 s20014| 2016-04-06T02:53:22.224-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -21.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.143-0500 s20014| 2016-04-06T02:53:22.224-0500 D ASIO [conn1] startCommand: RemoteCommand 522 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.224-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.144-0500 s20014| 2016-04-06T02:53:22.224-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 522 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:47.149-0500 s20014| 2016-04-06T02:53:22.226-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 522 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.155-0500 s20014| 2016-04-06T02:53:22.226-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.163-0500 s20014| 2016-04-06T02:53:22.229-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -20.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.166-0500 s20014| 2016-04-06T02:53:22.230-0500 D ASIO [conn1] startCommand: RemoteCommand 524 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.230-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.167-0500 s20014| 2016-04-06T02:53:22.230-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 524 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:47.169-0500 s20014| 2016-04-06T02:53:22.232-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 524 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.171-0500 s20014| 2016-04-06T02:53:22.232-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.176-0500 s20014| 2016-04-06T02:53:22.235-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -19.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.178-0500 s20014| 2016-04-06T02:53:22.235-0500 D ASIO [conn1] startCommand: RemoteCommand 526 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.235-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.179-0500 s20014| 2016-04-06T02:53:22.235-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 526 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.180-0500 s20014| 2016-04-06T02:53:22.241-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 526 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.180-0500 s20014| 2016-04-06T02:53:22.242-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.186-0500 s20014| 2016-04-06T02:53:22.244-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -18.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.189-0500 s20014| 2016-04-06T02:53:22.245-0500 D ASIO [conn1] startCommand: RemoteCommand 528 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.244-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.191-0500 s20014| 2016-04-06T02:53:22.245-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 528 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.194-0500 s20014| 2016-04-06T02:53:22.246-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 528 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.195-0500 s20014| 2016-04-06T02:53:22.246-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.200-0500 s20014| 2016-04-06T02:53:22.249-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -17.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.204-0500 s20014| 2016-04-06T02:53:22.249-0500 D ASIO [conn1] startCommand: RemoteCommand 530 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.249-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.205-0500 s20014| 2016-04-06T02:53:22.249-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 530 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.209-0500 s20014| 2016-04-06T02:53:22.250-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 530 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.209-0500 s20014| 2016-04-06T02:53:22.250-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:47.218-0500 s20014| 2016-04-06T02:53:22.255-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -16.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:47.219-0500 s20014| 2016-04-06T02:53:22.255-0500 D ASIO [conn1] startCommand: RemoteCommand 532 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.255-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:47.221-0500 s20014| 2016-04-06T02:53:22.255-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 532 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:47.224-0500 s20014| 2016-04-06T02:53:22.256-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 532 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: 
Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.230-0500 s20014| 2016-04-06T02:53:22.256-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.240-0500 s20014| 2016-04-06T02:53:22.259-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -15.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.242-0500 s20014| 2016-04-06T02:53:22.259-0500 D ASIO [conn1] startCommand: RemoteCommand 534 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.259-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.244-0500 s20014| 2016-04-06T02:53:22.259-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 534 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.247-0500 s20014| 2016-04-06T02:53:22.260-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 534 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.257-0500 s20014| 2016-04-06T02:53:22.262-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.260-0500 s20014| 2016-04-06T02:53:22.266-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -14.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.261-0500 s20014| 2016-04-06T02:53:22.266-0500 D ASIO [conn1] startCommand: RemoteCommand 536 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.266-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.262-0500 s20014| 2016-04-06T02:53:22.266-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 536 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.266-0500 s20014| 2016-04-06T02:53:22.267-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 536 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.268-0500 s20014| 2016-04-06T02:53:22.267-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.271-0500 s20014| 2016-04-06T02:53:22.272-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -13.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.274-0500 s20014| 2016-04-06T02:53:22.272-0500 D ASIO [conn1] startCommand: RemoteCommand 538 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.272-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.279-0500 s20014| 2016-04-06T02:53:22.272-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 538 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.283-0500 s20014| 2016-04-06T02:53:22.273-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 538 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.285-0500 s20014| 2016-04-06T02:53:22.273-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.287-0500 s20014| 2016-04-06T02:53:22.276-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -12.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.290-0500 s20014| 2016-04-06T02:53:22.277-0500 D ASIO [conn1] startCommand: RemoteCommand 540 -- target:mongovm16:20011 
db:config expDate:2016-04-06T02:53:52.277-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.291-0500 s20014| 2016-04-06T02:53:22.277-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 540 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.293-0500 s20014| 2016-04-06T02:53:22.277-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 540 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.294-0500 s20014| 2016-04-06T02:53:22.277-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.297-0500 s20014| 2016-04-06T02:53:22.279-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -11.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.300-0500 s20014| 2016-04-06T02:53:22.279-0500 D ASIO [conn1] startCommand: RemoteCommand 542 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.279-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.300-0500 s20014| 2016-04-06T02:53:22.280-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 542 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.304-0500 s20014| 2016-04-06T02:53:22.280-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 542 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.305-0500 s20014| 2016-04-06T02:53:22.280-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.310-0500 s20014| 2016-04-06T02:53:22.282-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -10.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: 
UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.313-0500 s20014| 2016-04-06T02:53:22.283-0500 D ASIO [conn1] startCommand: RemoteCommand 544 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.283-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.319-0500 s20014| 2016-04-06T02:53:22.283-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 544 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.327-0500 s20014| 2016-04-06T02:53:22.283-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 544 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.330-0500 s20014| 2016-04-06T02:53:22.283-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.336-0500 s20014| 2016-04-06T02:53:22.286-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -9.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.340-0500 s20014| 2016-04-06T02:53:22.286-0500 D ASIO [conn1] startCommand: RemoteCommand 546 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.286-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.344-0500 s20014| 2016-04-06T02:53:22.286-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 546 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.346-0500 s20014| 2016-04-06T02:53:22.286-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 546 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.346-0500 s20014| 2016-04-06T02:53:22.286-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.357-0500 s20014| 2016-04-06T02:53:22.289-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", 
keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -8.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.359-0500 s20014| 2016-04-06T02:53:22.289-0500 D ASIO [conn1] startCommand: RemoteCommand 548 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.289-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.366-0500 s20014| 2016-04-06T02:53:22.289-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 548 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.368-0500 s20014| 2016-04-06T02:53:22.289-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 548 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.370-0500 s20014| 2016-04-06T02:53:22.289-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.384-0500 s20014| 2016-04-06T02:53:22.292-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -7.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.389-0500 s20014| 2016-04-06T02:53:22.292-0500 D ASIO [conn1] startCommand: RemoteCommand 550 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.292-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.389-0500 s20014| 2016-04-06T02:53:22.292-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 550 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.397-0500 s20014| 2016-04-06T02:53:22.292-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 550 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } 
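[annotation] The `find` on `config.chunks` repeated throughout this log is how mongos refreshes chunk metadata between split attempts: it asks a config server for the highest-`lastmod` chunk of the namespace, with `readConcern: { level: "majority", afterOpTime: ... }` so it never observes config state older than the last optime it knows was committed. A minimal mongo-shell sketch of the same query, assuming the host/port values from this log (the `afterOpTime` field is omitted here; mongos fills it in from its own tracked config optime):

    // Connect to one config server of multidrop-configRS (port taken from this log).
    var conf = new Mongo("mongovm16:20012").getDB("config");

    // Same shape as the logged command: newest chunk for the namespace, majority read.
    var res = conf.runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch[0]);
    // -> { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp(1000, 80), ..., max: { _id: MaxKey } }
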
[js_test:multi_coll_drop] 2016-04-06T02:53:47.399-0500 s20014| 2016-04-06T02:53:22.292-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.408-0500 s20014| 2016-04-06T02:53:22.295-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -6.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.414-0500 s20014| 2016-04-06T02:53:22.295-0500 D ASIO [conn1] startCommand: RemoteCommand 552 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.295-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.416-0500 s20014| 2016-04-06T02:53:22.295-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 552 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.424-0500 s20014| 2016-04-06T02:53:22.296-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 552 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.425-0500 s20014| 2016-04-06T02:53:22.296-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.432-0500 s20014| 2016-04-06T02:53:22.300-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -5.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.446-0500 s20014| 2016-04-06T02:53:22.300-0500 D ASIO [conn1] startCommand: RemoteCommand 554 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.300-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.448-0500 s20014| 2016-04-06T02:53:22.300-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 554 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.452-0500 s20014| 2016-04-06T02:53:22.301-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Request 554 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.452-0500 s20014| 2016-04-06T02:53:22.301-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.458-0500 s20014| 2016-04-06T02:53:22.305-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -4.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.462-0500 s20014| 2016-04-06T02:53:22.305-0500 D ASIO [conn1] startCommand: RemoteCommand 556 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.305-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.463-0500 s20014| 2016-04-06T02:53:22.305-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 556 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.467-0500 s20014| 2016-04-06T02:53:22.311-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 556 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.469-0500 s20014| 2016-04-06T02:53:22.311-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.474-0500 s20014| 2016-04-06T02:53:22.315-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -3.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.479-0500 s20014| 2016-04-06T02:53:22.315-0500 D ASIO [conn1] startCommand: RemoteCommand 558 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.315-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 
1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.479-0500 s20014| 2016-04-06T02:53:22.315-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 558 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.481-0500 s20014| 2016-04-06T02:53:22.316-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 558 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.481-0500 s20014| 2016-04-06T02:53:22.316-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.483-0500 s20014| 2016-04-06T02:53:22.324-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -2.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.484-0500 s20014| 2016-04-06T02:53:22.324-0500 D ASIO [conn1] startCommand: RemoteCommand 560 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.324-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.484-0500 s20014| 2016-04-06T02:53:22.324-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 560 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.485-0500 s20014| 2016-04-06T02:53:22.327-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 560 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.486-0500 s20014| 2016-04-06T02:53:22.327-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.489-0500 s20014| 2016-04-06T02:53:22.331-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: -1.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll 
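[annotation] The failure pattern here is uniform: every `splitChunk` is rejected with `LockBusy` because the shard cannot acquire the distributed collection lock for `multidrop.coll` on the config servers (this suite runs under sharding_continuous_config_stepdown, so the config primary is being stepped down continuously), and mongos simply advances to the next split key (-22, -21, ..., -1, 0, 1, ...). A hedged sketch of how a caller could retry a single split until the lock frees up; `splitWithRetry` is an illustrative helper, not code from the test:

    // Hypothetical helper: retry a split until the distributed lock is released
    // (e.g. after a config-server stepdown settles). adminDB is the "admin" DB
    // of a mongos connection.
    function splitWithRetry(adminDB, ns, key, attempts) {
        for (var i = 0; i < attempts; i++) {
            var res = adminDB.runCommand({ split: ns, middle: key });
            if (res.ok) {
                return res;  // split committed on the config servers
            }
            print("split at " + tojson(key) + " failed: " + res.errmsg + "; retrying");
            sleep(1000);     // back off before contending for the lock again
        }
        throw Error("could not split " + ns + " at " + tojson(key));
    }

    // e.g. splitWithRetry(db.getSiblingDB("admin"), "multidrop.coll", { _id: -21.0 }, 30);
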
[js_test:multi_coll_drop] 2016-04-06T02:53:47.490-0500 s20014| 2016-04-06T02:53:22.331-0500 D ASIO [conn1] startCommand: RemoteCommand 562 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.331-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.491-0500 s20014| 2016-04-06T02:53:22.331-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 562 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.493-0500 s20014| 2016-04-06T02:53:22.334-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 562 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.493-0500 s20014| 2016-04-06T02:53:22.334-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.497-0500 s20014| 2016-04-06T02:53:22.345-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.500-0500 s20014| 2016-04-06T02:53:22.345-0500 D ASIO [conn1] startCommand: RemoteCommand 564 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.345-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.502-0500 s20014| 2016-04-06T02:53:22.345-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 564 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.505-0500 s20014| 2016-04-06T02:53:22.346-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 564 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.506-0500 s20014| 2016-04-06T02:53:22.346-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.511-0500 s20014| 2016-04-06T02:53:22.349-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], configdb: 
"multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.516-0500 s20014| 2016-04-06T02:53:22.349-0500 D ASIO [conn1] startCommand: RemoteCommand 566 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.349-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.518-0500 s20014| 2016-04-06T02:53:22.349-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 566 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.526-0500 s20014| 2016-04-06T02:53:22.350-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 566 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.529-0500 s20014| 2016-04-06T02:53:22.350-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.536-0500 s20014| 2016-04-06T02:53:22.352-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.538-0500 s20014| 2016-04-06T02:53:22.353-0500 D ASIO [conn1] startCommand: RemoteCommand 568 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.353-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.539-0500 s20014| 2016-04-06T02:53:22.353-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 568 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.541-0500 s20014| 2016-04-06T02:53:22.354-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 568 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.542-0500 s20014| 2016-04-06T02:53:22.354-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: 
MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.547-0500 s20014| 2016-04-06T02:53:22.357-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.549-0500 s20014| 2016-04-06T02:53:22.357-0500 D ASIO [conn1] startCommand: RemoteCommand 570 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.357-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.551-0500 s20014| 2016-04-06T02:53:22.357-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 570 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.554-0500 s20014| 2016-04-06T02:53:22.357-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 570 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.558-0500 s20014| 2016-04-06T02:53:22.357-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.562-0500 s20014| 2016-04-06T02:53:22.360-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 4.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.565-0500 s20014| 2016-04-06T02:53:22.360-0500 D ASIO [conn1] startCommand: RemoteCommand 572 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.360-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.566-0500 s20014| 2016-04-06T02:53:22.360-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 572 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.568-0500 s20014| 2016-04-06T02:53:22.362-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 572 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 
1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.569-0500 s20014| 2016-04-06T02:53:22.363-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.574-0500 s20014| 2016-04-06T02:53:22.369-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 5.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.590-0500 s20014| 2016-04-06T02:53:22.369-0500 D ASIO [conn1] startCommand: RemoteCommand 574 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.369-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.591-0500 s20014| 2016-04-06T02:53:22.370-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 574 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.595-0500 s20014| 2016-04-06T02:53:22.370-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 574 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.596-0500 s20014| 2016-04-06T02:53:22.370-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.604-0500 s20014| 2016-04-06T02:53:22.373-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 6.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.611-0500 s20014| 2016-04-06T02:53:22.374-0500 D ASIO [conn1] startCommand: RemoteCommand 576 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.374-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.612-0500 s20014| 2016-04-06T02:53:22.374-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 576 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.617-0500 s20014| 2016-04-06T02:53:22.374-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 576 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.619-0500 s20014| 2016-04-06T02:53:22.374-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.626-0500 s20014| 2016-04-06T02:53:22.378-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 7.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.629-0500 s20014| 2016-04-06T02:53:22.378-0500 D ASIO [conn1] startCommand: RemoteCommand 578 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.378-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.630-0500 s20014| 2016-04-06T02:53:22.378-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 578 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.634-0500 s20014| 2016-04-06T02:53:22.378-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 578 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.635-0500 s20014| 2016-04-06T02:53:22.378-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.641-0500 s20014| 2016-04-06T02:53:22.382-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 8.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.644-0500 s20014| 2016-04-06T02:53:22.382-0500 D ASIO [conn1] startCommand: RemoteCommand 580 -- target:mongovm16:20012 
db:config expDate:2016-04-06T02:53:52.382-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.649-0500 s20014| 2016-04-06T02:53:22.382-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 580 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.658-0500 s20014| 2016-04-06T02:53:22.382-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 580 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.659-0500 s20014| 2016-04-06T02:53:22.383-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.664-0500 s20014| 2016-04-06T02:53:22.390-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 9.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.670-0500 s20014| 2016-04-06T02:53:22.390-0500 D ASIO [conn1] startCommand: RemoteCommand 582 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.390-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.674-0500 s20014| 2016-04-06T02:53:22.391-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 582 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.684-0500 s20014| 2016-04-06T02:53:22.391-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 582 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.686-0500 s20014| 2016-04-06T02:53:22.391-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.692-0500 s20014| 2016-04-06T02:53:22.400-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 10.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: 
UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.696-0500 s20014| 2016-04-06T02:53:22.400-0500 D ASIO [conn1] startCommand: RemoteCommand 584 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.400-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.700-0500 s20014| 2016-04-06T02:53:22.400-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 584 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.716-0500 s20014| 2016-04-06T02:53:22.400-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 584 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.718-0500 s20014| 2016-04-06T02:53:22.400-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.721-0500 s20014| 2016-04-06T02:53:22.406-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 11.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.724-0500 s20014| 2016-04-06T02:53:22.406-0500 D ASIO [conn1] startCommand: RemoteCommand 586 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.406-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.728-0500 s20014| 2016-04-06T02:53:22.406-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 586 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.734-0500 s20014| 2016-04-06T02:53:22.406-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 586 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.736-0500 s20014| 2016-04-06T02:53:22.406-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.757-0500 s20014| 2016-04-06T02:53:22.409-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", 
keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 12.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.771-0500 s20014| 2016-04-06T02:53:22.409-0500 D ASIO [conn1] startCommand: RemoteCommand 588 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.409-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.775-0500 s20014| 2016-04-06T02:53:22.409-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 588 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.778-0500 s20014| 2016-04-06T02:53:22.409-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 588 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.779-0500 s20014| 2016-04-06T02:53:22.409-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.784-0500 s20014| 2016-04-06T02:53:22.411-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 13.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.788-0500 s20014| 2016-04-06T02:53:22.411-0500 D ASIO [conn1] startCommand: RemoteCommand 590 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.411-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.791-0500 s20014| 2016-04-06T02:53:22.411-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 590 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.797-0500 s20014| 2016-04-06T02:53:22.411-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 590 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } 
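
Note: every splitChunk attempt above is failing on the *distributed* collection lock for multidrop.coll (LockBusy after the shard's acquisition timeout), not on a storage-level lock. In this 3.3-era topology the lock state is a document on the config servers, so a minimal diagnostic sketch (hypothetical, not part of the test) run against the multidrop-configRS primary would be:

    // Who currently holds the distributed lock that splitChunk is waiting on?
    var conf = db.getSiblingDB("config");
    conf.locks.find({ _id: "multidrop.coll" }).pretty();   // state: 2 means locked; "who"/"why" identify the holder
    // Lock liveness is judged from the holder's last ping:
    conf.lockpings.find().sort({ ping: -1 }).limit(5);

As long as the holder's entry stays live, each retry below times out the same way and the mongos simply moves on to the next split key.
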
[js_test:multi_coll_drop] 2016-04-06T02:53:47.800-0500 s20014| 2016-04-06T02:53:22.411-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.806-0500 s20014| 2016-04-06T02:53:22.413-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 14.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.810-0500 s20014| 2016-04-06T02:53:22.413-0500 D ASIO [conn1] startCommand: RemoteCommand 592 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.413-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.812-0500 s20014| 2016-04-06T02:53:22.413-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 592 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.816-0500 s20014| 2016-04-06T02:53:22.414-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 592 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.817-0500 s20014| 2016-04-06T02:53:22.414-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.820-0500 s20014| 2016-04-06T02:53:22.416-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 15.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.825-0500 s20014| 2016-04-06T02:53:22.416-0500 D ASIO [conn1] startCommand: RemoteCommand 594 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.416-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.827-0500 s20014| 2016-04-06T02:53:22.416-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 594 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.829-0500 s20014| 2016-04-06T02:53:22.416-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Request 594 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.832-0500 s20014| 2016-04-06T02:53:22.416-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.837-0500 s20014| 2016-04-06T02:53:22.418-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 16.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.841-0500 s20014| 2016-04-06T02:53:22.418-0500 D ASIO [conn1] startCommand: RemoteCommand 596 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.418-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.843-0500 s20014| 2016-04-06T02:53:22.418-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 596 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.845-0500 s20014| 2016-04-06T02:53:22.418-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 596 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.848-0500 s20014| 2016-04-06T02:53:22.418-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.852-0500 s20014| 2016-04-06T02:53:22.420-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 17.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.852-0500 s20014| 2016-04-06T02:53:22.420-0500 D ASIO [conn1] startCommand: RemoteCommand 598 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.420-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 
1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.855-0500 s20014| 2016-04-06T02:53:22.420-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 598 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.856-0500 s20014| 2016-04-06T02:53:22.421-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 598 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.858-0500 s20014| 2016-04-06T02:53:22.421-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.860-0500 s20014| 2016-04-06T02:53:22.422-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 18.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.863-0500 s20014| 2016-04-06T02:53:22.422-0500 D ASIO [conn1] startCommand: RemoteCommand 600 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.422-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.864-0500 s20014| 2016-04-06T02:53:22.422-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 600 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.873-0500 s20014| 2016-04-06T02:53:22.423-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 600 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.875-0500 s20014| 2016-04-06T02:53:22.423-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.884-0500 s20014| 2016-04-06T02:53:22.425-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 19.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll 
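
Note: between attempts the router re-reads the chunk it is trying to split; that is what RemoteCommand 584/586/588/... above are. As a shell command against a config server, the refresh is roughly the following (values copied from the log; afterOpTime is whatever opTime the router last observed, so treat it as an example). The log prints the opTime as "Timestamp 1459929201000|1", i.e. milliseconds, while the shell constructor takes seconds:

    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },             // newest chunk version first
        limit: 1,
        maxTimeMS: 30000,
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929201, 1), t: NumberLong(5) } }
    });

The response's lastmod stays at Timestamp 1000|80 on every refresh, confirming that none of the attempted splits has committed yet.
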
[js_test:multi_coll_drop] 2016-04-06T02:53:47.886-0500 s20014| 2016-04-06T02:53:22.425-0500 D ASIO [conn1] startCommand: RemoteCommand 602 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.425-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.887-0500 s20014| 2016-04-06T02:53:22.425-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 602 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.896-0500 s20014| 2016-04-06T02:53:22.425-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 602 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.897-0500 s20014| 2016-04-06T02:53:22.425-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.900-0500 s20014| 2016-04-06T02:53:22.427-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 20.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.901-0500 2016-04-06T02:53:30.902-0500 I NETWORK [thread2] reconnect mongovm16:20012 (192.168.100.28) ok [js_test:multi_coll_drop] 2016-04-06T02:53:47.922-0500 s20014| 2016-04-06T02:53:22.427-0500 D ASIO [conn1] startCommand: RemoteCommand 604 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.427-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.924-0500 s20014| 2016-04-06T02:53:22.427-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 604 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.925-0500 s20014| 2016-04-06T02:53:22.428-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 604 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.926-0500 s20014| 2016-04-06T02:53:22.428-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.930-0500 s20014| 2016-04-06T02:53:22.430-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, 
min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 21.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.932-0500 s20014| 2016-04-06T02:53:22.430-0500 D ASIO [conn1] startCommand: RemoteCommand 606 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.430-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.934-0500 s20014| 2016-04-06T02:53:22.430-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 606 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:47.942-0500 s20014| 2016-04-06T02:53:22.430-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 606 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.943-0500 s20014| 2016-04-06T02:53:22.430-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.952-0500 s20014| 2016-04-06T02:53:22.433-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 22.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.960-0500 s20014| 2016-04-06T02:53:22.433-0500 D ASIO [conn1] startCommand: RemoteCommand 608 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.433-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.961-0500 s20014| 2016-04-06T02:53:22.433-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 608 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.963-0500 s20014| 2016-04-06T02:53:22.433-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 608 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 
2016-04-06T02:53:47.965-0500 s20014| 2016-04-06T02:53:22.433-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.966-0500 s20014| 2016-04-06T02:53:22.437-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 23.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.968-0500 s20014| 2016-04-06T02:53:22.437-0500 D ASIO [conn1] startCommand: RemoteCommand 610 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.437-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.968-0500 s20014| 2016-04-06T02:53:22.441-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 610 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.970-0500 s20014| 2016-04-06T02:53:22.442-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 610 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.970-0500 s20014| 2016-04-06T02:53:22.442-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:47.974-0500 s20014| 2016-04-06T02:53:22.446-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 24.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:47.979-0500 s20014| 2016-04-06T02:53:22.446-0500 D ASIO [conn1] startCommand: RemoteCommand 612 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.446-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:47.992-0500 s20014| 2016-04-06T02:53:22.446-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 612 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:47.997-0500 s20014| 2016-04-06T02:53:22.446-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] 
Request 612 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.001-0500 s20014| 2016-04-06T02:53:22.447-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.002-0500 s20014| 2016-04-06T02:53:22.451-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 25.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.003-0500 s20014| 2016-04-06T02:53:22.451-0500 D ASIO [conn1] startCommand: RemoteCommand 614 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.451-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.003-0500 s20014| 2016-04-06T02:53:22.451-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 614 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:48.004-0500 s20014| 2016-04-06T02:53:22.451-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 614 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.004-0500 s20014| 2016-04-06T02:53:22.452-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.005-0500 s20014| 2016-04-06T02:53:22.456-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 26.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.006-0500 s20014| 2016-04-06T02:53:22.456-0500 D ASIO [conn1] startCommand: RemoteCommand 616 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.456-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 
30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.006-0500 s20014| 2016-04-06T02:53:22.456-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 616 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:48.006-0500 s20014| 2016-04-06T02:53:22.457-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 616 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.006-0500 s20014| 2016-04-06T02:53:22.457-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.007-0500 s20014| 2016-04-06T02:53:22.461-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 27.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.008-0500 s20014| 2016-04-06T02:53:22.461-0500 D ASIO [conn1] startCommand: RemoteCommand 618 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.461-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.008-0500 s20014| 2016-04-06T02:53:22.461-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 618 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.008-0500 s20014| 2016-04-06T02:53:22.461-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 618 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.009-0500 s20014| 2016-04-06T02:53:22.461-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.014-0500 s20014| 2016-04-06T02:53:22.465-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 28.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.034-0500 
s20014| 2016-04-06T02:53:22.465-0500 D ASIO [conn1] startCommand: RemoteCommand 620 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.465-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.037-0500 s20014| 2016-04-06T02:53:22.465-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 620 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:48.045-0500 s20014| 2016-04-06T02:53:22.465-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 620 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.052-0500 s20014| 2016-04-06T02:53:22.465-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.062-0500 s20014| 2016-04-06T02:53:22.469-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 29.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.066-0500 s20014| 2016-04-06T02:53:22.469-0500 D ASIO [conn1] startCommand: RemoteCommand 622 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.469-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.070-0500 s20014| 2016-04-06T02:53:22.469-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 622 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.074-0500 s20014| 2016-04-06T02:53:22.469-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 622 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.075-0500 s20014| 2016-04-06T02:53:22.469-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.092-0500 s20014| 2016-04-06T02:53:22.476-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 30.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, 
ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.098-0500 s20014| 2016-04-06T02:53:22.476-0500 D ASIO [conn1] startCommand: RemoteCommand 624 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.476-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.099-0500 s20014| 2016-04-06T02:53:22.476-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 624 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.102-0500 s20014| 2016-04-06T02:53:22.481-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 624 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.106-0500 s20014| 2016-04-06T02:53:22.481-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.109-0500 s20014| 2016-04-06T02:53:22.498-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 31.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.117-0500 s20014| 2016-04-06T02:53:22.498-0500 D ASIO [conn1] startCommand: RemoteCommand 626 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.498-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.121-0500 s20014| 2016-04-06T02:53:22.498-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 626 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.127-0500 s20014| 2016-04-06T02:53:22.499-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 626 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.128-0500 s20014| 2016-04-06T02:53:22.499-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
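
Note: the splitKeys climbing one by one ({ _id: 31.0 }, { _id: 32.0 }, ...) are the test driving one split per key through mongos s20014. A sketch of the driving side, assuming a shell connected to that mongos (the test's own code may differ):

    sh.splitAt("multidrop.coll", { _id: 32.0 });
    // equivalently, without the helper:
    db.adminCommand({ split: "multidrop.coll", middle: { _id: 32.0 } });

The mongos turns this into the splitChunk command logged above, sent to shard0000 with the collection's current shardVersion and epoch attached, which is why every retry repeats the same Timestamp 1000|80 / ObjectId('5704c02806c33406d4d9c0c0') pair.
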
[js_test:multi_coll_drop] 2016-04-06T02:53:48.136-0500 s20014| 2016-04-06T02:53:22.507-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 32.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.139-0500 s20014| 2016-04-06T02:53:22.507-0500 D ASIO [conn1] startCommand: RemoteCommand 628 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.507-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.141-0500 s20014| 2016-04-06T02:53:22.507-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 628 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:48.146-0500 s20014| 2016-04-06T02:53:22.509-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 628 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.149-0500 s20014| 2016-04-06T02:53:22.509-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.152-0500 s20014| 2016-04-06T02:53:22.516-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 33.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.163-0500 s20014| 2016-04-06T02:53:22.516-0500 D ASIO [conn1] startCommand: RemoteCommand 630 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.516-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.165-0500 s20014| 2016-04-06T02:53:22.516-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 630 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.169-0500 s20014| 2016-04-06T02:53:22.516-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 630 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns:
"multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.171-0500 s20014| 2016-04-06T02:53:22.517-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.177-0500 s20014| 2016-04-06T02:53:22.531-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 34.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.180-0500 s20014| 2016-04-06T02:53:22.531-0500 D ASIO [conn1] startCommand: RemoteCommand 632 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.531-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.182-0500 s20014| 2016-04-06T02:53:22.531-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 632 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.200-0500 s20014| 2016-04-06T02:53:22.531-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 632 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.201-0500 s20014| 2016-04-06T02:53:22.531-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.204-0500 s20014| 2016-04-06T02:53:22.548-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 35.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.207-0500 s20014| 2016-04-06T02:53:22.548-0500 D ASIO [conn1] startCommand: RemoteCommand 634 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.548-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.216-0500 s20014| 2016-04-06T02:53:22.548-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 634 on host 
mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:48.226-0500 s20014| 2016-04-06T02:53:22.549-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 634 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.228-0500 s20014| 2016-04-06T02:53:22.550-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.232-0500 s20014| 2016-04-06T02:53:22.556-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 36.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.235-0500 s20014| 2016-04-06T02:53:22.556-0500 D ASIO [conn1] startCommand: RemoteCommand 636 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:52.556-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.236-0500 s20014| 2016-04-06T02:53:22.556-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 636 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.240-0500 s20014| 2016-04-06T02:53:22.557-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 636 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.241-0500 s20014| 2016-04-06T02:53:22.557-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.247-0500 s20014| 2016-04-06T02:53:22.562-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 37.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.250-0500 s20014| 2016-04-06T02:53:22.563-0500 D ASIO [conn1] startCommand: RemoteCommand 638 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.563-0500 cmd:{ find: "chunks", filter: { ns: 
"multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.251-0500 s20014| 2016-04-06T02:53:22.563-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 638 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:48.254-0500 s20014| 2016-04-06T02:53:22.563-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 638 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.256-0500 s20014| 2016-04-06T02:53:22.563-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.270-0500 s20014| 2016-04-06T02:53:22.572-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 38.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.273-0500 s20014| 2016-04-06T02:53:22.572-0500 D ASIO [conn1] startCommand: RemoteCommand 640 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.572-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.276-0500 s20014| 2016-04-06T02:53:22.572-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 640 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:48.282-0500 s20014| 2016-04-06T02:53:22.573-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 640 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.282-0500 s20014| 2016-04-06T02:53:22.573-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.288-0500 s20014| 2016-04-06T02:53:22.576-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 39.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ 
_id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.297-0500 s20014| 2016-04-06T02:53:22.576-0500 D ASIO [conn1] startCommand: RemoteCommand 642 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:53:52.576-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.298-0500 s20014| 2016-04-06T02:53:22.576-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 642 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:48.312-0500 s20014| 2016-04-06T02:53:22.576-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 642 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.313-0500 s20014| 2016-04-06T02:53:22.576-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.315-0500 s20014| 2016-04-06T02:53:24.665-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:48.318-0500 s20014| 2016-04-06T02:53:24.666-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:53:48.323-0500 s20014| 2016-04-06T02:53:29.117-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 40.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.334-0500 s20014| 2016-04-06T02:53:29.117-0500 D ASIO [conn1] startCommand: RemoteCommand 644 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:53:59.117-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.337-0500 s20014| 2016-04-06T02:53:29.117-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 644 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.345-0500 s20014| 2016-04-06T02:53:29.118-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 644 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.346-0500 s20014| 
2016-04-06T02:53:29.118-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.347-0500 s20014| 2016-04-06T02:53:29.118-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20010, no events [js_test:multi_coll_drop] 2016-04-06T02:53:48.352-0500 s20014| 2016-04-06T02:53:30.046-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 41.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.357-0500 s20014| 2016-04-06T02:53:30.046-0500 D ASIO [conn1] startCommand: RemoteCommand 646 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:00.046-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.358-0500 s20014| 2016-04-06T02:53:30.047-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 646 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.360-0500 s20014| 2016-04-06T02:53:30.048-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 646 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.361-0500 s20014| 2016-04-06T02:53:30.049-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.366-0500 s20014| 2016-04-06T02:53:30.054-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 42.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.367-0500 s20014| 2016-04-06T02:53:30.055-0500 D ASIO [conn1] startCommand: RemoteCommand 648 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:00.055-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.368-0500 s20014| 2016-04-06T02:53:30.055-0500 I ASIO [conn1] dropping unhealthy pooled connection to mongovm16:20011
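
Note: here the failure mode changes. The router has just dropped an unhealthy pooled connection to mongovm16:20011; below, the reissued request 648 dies with HostUnreachable: End of file, the host is marked failed, and the read is retried against mongovm16:20012. The afterOpTime term has also moved from t: 5 to t: 6, i.e. a config-server election happened during the stall. Plausible checks at this point (hypothetical, not issued by the test):

    // On the mongos: inspect the outgoing pools after the unhealthy-connection drops
    db.adminCommand({ connPoolStats: 1 });
    // On a surviving config server: the set's current term should now be 6
    db.adminCommand({ replSetGetStatus: 1 }).term;
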
[js_test:multi_coll_drop] 2016-04-06T02:53:48.374-0500 s20014| 2016-04-06T02:53:30.055-0500 I ASIO [conn1] after drop, pool was empty, going to spawn some connections
[js_test:multi_coll_drop] 2016-04-06T02:53:48.378-0500 s20014| 2016-04-06T02:53:30.055-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:48.379-0500 s20014| 2016-04-06T02:53:30.055-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 649 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:48.383-0500 s20014| 2016-04-06T02:53:30.056-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:48.383-0500 s20014| 2016-04-06T02:53:30.056-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 649 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:53:48.386-0500 s20014| 2016-04-06T02:53:30.056-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 648 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:48.388-0500 s20014| 2016-04-06T02:53:31.839-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Failed to execute command: RemoteCommand 648 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:00.055-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } reason: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:48.391-0500 s20014| 2016-04-06T02:53:31.839-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 648 finished with response: HostUnreachable: End of file
[js_test:multi_coll_drop] 2016-04-06T02:53:48.391-0500 s20014| 2016-04-06T02:53:31.839-0500 D NETWORK [conn1] Marking host mongovm16:20011 as failed
[js_test:multi_coll_drop] 2016-04-06T02:53:48.393-0500 s20014| 2016-04-06T02:53:31.839-0500 D ASIO [conn1] startCommand: RemoteCommand 651 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.839-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.395-0500 s20014| 2016-04-06T02:53:31.839-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 651 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.399-0500 s20014| 2016-04-06T02:53:31.841-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 651 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.403-0500 s20014| 2016-04-06T02:53:31.842-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.406-0500 s20014| 2016-04-06T02:53:31.845-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 43.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
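
Every refresh between retries is the same causally pinned read: the newest chunk for the namespace, at majority read concern, no earlier than the last config opTime the router observed. Issued by hand it would look roughly like this (afterOpTime is an internal server field that routers set themselves; it is shown here only for illustration, with values copied from the log entries above):

    // Sketch of the metadata refresh mongos runs between split attempts.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },           // newest chunk version first
        limit: 1,
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929209, 1), t: NumberLong(6) } },
        maxTimeMS: 30000
    });

Note that every one of these responses still reports lastmod Timestamp 1000|80: the chunk version never advances, confirming that none of the split attempts has committed yet.
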
MaxKey }, from: "shard0000", splitKeys: [ { _id: 43.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.411-0500 s20014| 2016-04-06T02:53:31.845-0500 D ASIO [conn1] startCommand: RemoteCommand 653 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.845-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.412-0500 s20014| 2016-04-06T02:53:31.846-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 653 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.416-0500 s20014| 2016-04-06T02:53:31.851-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 653 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.419-0500 s20014| 2016-04-06T02:53:31.852-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.435-0500 s20014| 2016-04-06T02:53:31.855-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 44.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.439-0500 s20014| 2016-04-06T02:53:31.856-0500 D ASIO [conn1] startCommand: RemoteCommand 655 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.856-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.443-0500 s20014| 2016-04-06T02:53:31.856-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 655 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.445-0500 s20014| 2016-04-06T02:53:31.866-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 655 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.446-0500 s20014| 
[js_test:multi_coll_drop] 2016-04-06T02:53:48.450-0500 s20014| 2016-04-06T02:53:31.870-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 45.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.454-0500 s20014| 2016-04-06T02:53:31.870-0500 D ASIO [conn1] startCommand: RemoteCommand 657 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.870-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.455-0500 s20014| 2016-04-06T02:53:31.871-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 657 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.457-0500 s20014| 2016-04-06T02:53:31.871-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 657 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.458-0500 s20014| 2016-04-06T02:53:31.872-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.463-0500 s20014| 2016-04-06T02:53:31.874-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 46.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.464-0500 s20014| 2016-04-06T02:53:31.874-0500 D ASIO [conn1] startCommand: RemoteCommand 659 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.874-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.465-0500 s20014| 2016-04-06T02:53:31.877-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 659 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.467-0500 s20014| 2016-04-06T02:53:31.879-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 659 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.468-0500 s20014| 2016-04-06T02:53:31.879-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.472-0500 s20014| 2016-04-06T02:53:31.883-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 47.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.477-0500 s20014| 2016-04-06T02:53:31.883-0500 D ASIO [conn1] startCommand: RemoteCommand 661 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.883-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.480-0500 s20014| 2016-04-06T02:53:31.883-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 661 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.485-0500 s20014| 2016-04-06T02:53:31.886-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 661 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.486-0500 s20014| 2016-04-06T02:53:31.886-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.495-0500 s20014| 2016-04-06T02:53:31.889-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 48.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.498-0500 s20014| 2016-04-06T02:53:31.889-0500 D ASIO [conn1] startCommand: RemoteCommand 663 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:00.055-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.500-0500 s20014| 2016-04-06T02:53:31.889-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 663 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.507-0500 s20014| 2016-04-06T02:53:31.890-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 663 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.509-0500 s20014| 2016-04-06T02:53:31.890-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.517-0500 s20014| 2016-04-06T02:53:31.893-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 49.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.529-0500 s20014| 2016-04-06T02:53:31.894-0500 D ASIO [conn1] startCommand: RemoteCommand 665 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.894-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.535-0500 s20014| 2016-04-06T02:53:31.894-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 665 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.539-0500 s20014| 2016-04-06T02:53:31.898-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 665 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.542-0500 s20014| 2016-04-06T02:53:31.898-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.551-0500 s20014| 2016-04-06T02:53:31.901-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 50.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.559-0500 s20014| 2016-04-06T02:53:31.901-0500 D ASIO [conn1] startCommand: RemoteCommand 667 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.901-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.562-0500 s20014| 2016-04-06T02:53:31.902-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 667 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.571-0500 s20014| 2016-04-06T02:53:31.909-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 667 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.577-0500 s20014| 2016-04-06T02:53:31.909-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.583-0500 s20014| 2016-04-06T02:53:31.914-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 51.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.585-0500 2016-04-06T02:53:32.977-0500 I NETWORK [thread2] trying reconnect to mongovm16:20013 (192.168.100.28) failed
[js_test:multi_coll_drop] 2016-04-06T02:53:48.589-0500 s20014| 2016-04-06T02:53:31.914-0500 D ASIO [conn1] startCommand: RemoteCommand 669 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.914-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.592-0500 s20014| 2016-04-06T02:53:31.914-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 669 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.596-0500 s20014| 2016-04-06T02:53:31.915-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 669 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.597-0500 s20014| 2016-04-06T02:53:31.915-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.606-0500 s20014| 2016-04-06T02:53:31.918-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 52.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
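
The LockBusy errors refer to the distributed lock that lives on the config servers. When diagnosing a run like this, the lock document can be inspected directly; something like the following sketch shows who currently holds the lock for the collection (field meanings per the config.locks schema of this era; shown as a diagnostic aid, not part of the test):

    // Inspect the distributed lock that splitChunk keeps timing out on.
    db.getSiblingDB("config").locks.find({ _id: "multidrop.coll" }).pretty();
    // state 0 means unlocked, 2 means held; the "who"/"why" fields identify
    // the process holding the lock and the operation it was taken for.
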
[js_test:multi_coll_drop] 2016-04-06T02:53:48.612-0500 s20014| 2016-04-06T02:53:31.918-0500 D ASIO [conn1] startCommand: RemoteCommand 671 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.918-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.619-0500 s20014| 2016-04-06T02:53:31.918-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 671 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.625-0500 s20014| 2016-04-06T02:53:31.919-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 671 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.627-0500 s20014| 2016-04-06T02:53:31.919-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.633-0500 s20014| 2016-04-06T02:53:31.922-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 53.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.640-0500 s20014| 2016-04-06T02:53:31.922-0500 D ASIO [conn1] startCommand: RemoteCommand 673 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.922-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.642-0500 s20014| 2016-04-06T02:53:31.922-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 673 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.647-0500 s20014| 2016-04-06T02:53:31.923-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 673 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.653-0500 s20014| 2016-04-06T02:53:31.923-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.659-0500 s20014| 2016-04-06T02:53:31.926-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 54.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.662-0500 s20014| 2016-04-06T02:53:31.926-0500 D ASIO [conn1] startCommand: RemoteCommand 675 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.926-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.663-0500 s20014| 2016-04-06T02:53:31.926-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 675 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.677-0500 s20014| 2016-04-06T02:53:31.928-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 675 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.678-0500 s20014| 2016-04-06T02:53:31.928-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.682-0500 s20014| 2016-04-06T02:53:31.930-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 55.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.688-0500 s20014| 2016-04-06T02:53:31.930-0500 D ASIO [conn1] startCommand: RemoteCommand 677 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.930-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.690-0500 s20014| 2016-04-06T02:53:31.931-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 677 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.699-0500 s20014| 2016-04-06T02:53:31.931-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 677 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
"multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.704-0500 s20014| 2016-04-06T02:53:31.931-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.718-0500 s20014| 2016-04-06T02:53:31.934-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 56.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.721-0500 s20014| 2016-04-06T02:53:31.934-0500 D ASIO [conn1] startCommand: RemoteCommand 679 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.934-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.722-0500 s20014| 2016-04-06T02:53:31.934-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 679 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:48.726-0500 s20014| 2016-04-06T02:53:31.934-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 679 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.726-0500 s20014| 2016-04-06T02:53:31.935-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:48.746-0500 s20014| 2016-04-06T02:53:31.936-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 57.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:48.752-0500 s20014| 2016-04-06T02:53:31.937-0500 D ASIO [conn1] startCommand: RemoteCommand 681 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.937-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.753-0500 s20014| 
[js_test:multi_coll_drop] 2016-04-06T02:53:48.760-0500 s20014| 2016-04-06T02:53:31.937-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 681 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.762-0500 s20014| 2016-04-06T02:53:31.937-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.766-0500 s20014| 2016-04-06T02:53:31.939-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 58.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.770-0500 s20014| 2016-04-06T02:53:31.940-0500 D ASIO [conn1] startCommand: RemoteCommand 683 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.939-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.772-0500 s20014| 2016-04-06T02:53:31.940-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 683 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.778-0500 s20014| 2016-04-06T02:53:31.940-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 683 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.780-0500 s20014| 2016-04-06T02:53:31.940-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.784-0500 s20014| 2016-04-06T02:53:31.943-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 59.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.786-0500 s20014| 2016-04-06T02:53:31.943-0500 D ASIO [conn1] startCommand: RemoteCommand 685 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.943-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.787-0500 s20014| 2016-04-06T02:53:31.944-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 685 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.789-0500 s20014| 2016-04-06T02:53:31.944-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 685 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.790-0500 s20014| 2016-04-06T02:53:31.945-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.794-0500 s20014| 2016-04-06T02:53:31.946-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 60.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll
[js_test:multi_coll_drop] 2016-04-06T02:53:48.796-0500 s20014| 2016-04-06T02:53:31.946-0500 D ASIO [conn1] startCommand: RemoteCommand 687 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:01.946-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.798-0500 s20014| 2016-04-06T02:53:31.946-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 687 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:48.801-0500 s20014| 2016-04-06T02:53:31.947-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 687 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.806-0500 s20014| 2016-04-06T02:53:31.947-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000
[js_test:multi_coll_drop] 2016-04-06T02:53:48.806-0500 s20014| 2016-04-06T02:53:31.994-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS
[js_test:multi_coll_drop] 2016-04-06T02:53:48.809-0500 s20014| 2016-04-06T02:53:31.994-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20011, event detected
[js_test:multi_coll_drop] 2016-04-06T02:53:48.811-0500 s20014| 2016-04-06T02:53:31.994-0500 I NETWORK [Balancer] Socket closed remotely, no longer connected (idle 13 secs, remote host 192.168.100.28:20011)
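
The Balancer's refresh that follows is the replica set monitor at work: each idle socket that the config stepdowns killed is detected ("Socket closed remotely"), reopened, and the member re-probed. The probe is essentially an isMaster round-trip, which can be reproduced from the shell (host name taken from the log; a diagnostic sketch, not the monitor's actual code):

    // Roughly what the monitor does per member during a refresh.
    var conn = new Mongo("mongovm16:20011");                       // reopen the dropped socket
    printjson(conn.getDB("admin").runCommand({ isMaster: 1 }));    // learn the member's current role
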
[js_test:multi_coll_drop] 2016-04-06T02:53:48.816-0500 s20014| 2016-04-06T02:53:31.994-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:48.820-0500 s20014| 2016-04-06T02:53:31.994-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:multi_coll_drop] 2016-04-06T02:53:48.821-0500 s20014| 2016-04-06T02:53:31.994-0500 D NETWORK [Balancer] connected to server mongovm16:20011 (192.168.100.28)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.821-0500 s20014| 2016-04-06T02:53:31.995-0500 D NETWORK [Balancer] connected connection!
[js_test:multi_coll_drop] 2016-04-06T02:53:48.824-0500 s20014| 2016-04-06T02:53:31.995-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20013, event detected
[js_test:multi_coll_drop] 2016-04-06T02:53:48.825-0500 s20014| 2016-04-06T02:53:31.995-0500 I NETWORK [Balancer] Socket closed remotely, no longer connected (idle 13 secs, remote host 192.168.100.28:20013)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.828-0500 s20014| 2016-04-06T02:53:31.995-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:48.829-0500 s20014| 2016-04-06T02:53:31.995-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:multi_coll_drop] 2016-04-06T02:53:48.830-0500 s20014| 2016-04-06T02:53:31.995-0500 D NETWORK [Balancer] connected to server mongovm16:20013 (192.168.100.28)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.833-0500 c20011| 2016-04-06T02:53:04.664-0500 D NETWORK [conn38] Socket say send() Bad file descriptor 192.168.100.28:59636
[js_test:multi_coll_drop] 2016-04-06T02:53:48.834-0500 c20011| 2016-04-06T02:53:04.664-0500 I NETWORK [conn38] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:59636]
[js_test:multi_coll_drop] 2016-04-06T02:53:48.839-0500 c20011| 2016-04-06T02:53:04.664-0500 I COMMAND [conn28] command admin.$cmd command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 4, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } numYields:0 reslen:159 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.840-0500 c20011| 2016-04-06T02:53:04.664-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33714 #47 (7 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.846-0500 c20011| 2016-04-06T02:53:04.664-0500 D NETWORK [conn37] SocketException: remote: 192.168.100.28:59630 error: 9001 socket exception [CLOSED] server [192.168.100.28:59630]
[js_test:multi_coll_drop] 2016-04-06T02:53:48.847-0500 c20011| 2016-04-06T02:53:04.664-0500 I NETWORK [conn37] end connection 192.168.100.28:59630 (6 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.849-0500 c20011| 2016-04-06T02:53:04.664-0500 D COMMAND [conn47] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.851-0500 c20011| 2016-04-06T02:53:04.665-0500 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.862-0500 c20011| 2016-04-06T02:53:04.665-0500 D NETWORK [conn29] SocketException: remote: 192.168.100.28:59436 error: 9001 socket exception [CLOSED] server [192.168.100.28:59436]
[js_test:multi_coll_drop] 2016-04-06T02:53:48.863-0500 c20011| 2016-04-06T02:53:04.665-0500 I NETWORK [conn29] end connection 192.168.100.28:59436 (5 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.863-0500 c20011| 2016-04-06T02:53:04.665-0500 D COMMAND [conn47] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.865-0500 c20011| 2016-04-06T02:53:04.665-0500 I COMMAND [conn47] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.867-0500 c20011| 2016-04-06T02:53:04.665-0500 D COMMAND [conn47] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.869-0500 c20011| 2016-04-06T02:53:04.665-0500 I COMMAND [conn47] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.871-0500 c20011| 2016-04-06T02:53:04.665-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:48.872-0500 c20011| 2016-04-06T02:53:04.665-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 325 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:53:48.875-0500 c20011| 2016-04-06T02:53:04.665-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 324 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:48.878-0500 c20011| 2016-04-06T02:53:04.665-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 324 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 0, durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, opTime: { ts: Timestamp 1459929163000|8, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.884-0500 c20011| 2016-04-06T02:53:04.666-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state SECONDARY
[js_test:multi_coll_drop] 2016-04-06T02:53:48.885-0500 c20011| 2016-04-06T02:53:04.666-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:07.166Z
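
The heartbeat traffic here is how c20011 rebuilds its view of the set after the election: Request 324's response carries state: 2, so mongovm16:20013 is recorded as SECONDARY and the next heartbeat is scheduled. The aggregate view those heartbeats produce is exactly what replSetGetStatus reports; checking it during such a window would look something like this (a diagnostic sketch, not part of the test):

    // Summarize what the heartbeats above feed into: per-member state.
    var st = db.getSiblingDB("admin").runCommand({ replSetGetStatus: 1 });
    st.members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr);    // e.g. "mongovm16:20013 -> SECONDARY"
    });
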
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:48.896-0500 c20011| 2016-04-06T02:53:04.667-0500 D QUERY [conn40] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:48.901-0500 c20011| 2016-04-06T02:53:04.667-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:48.908-0500 c20011| 2016-04-06T02:53:04.667-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:48.910-0500 c20011| 2016-04-06T02:53:04.667-0500 D NETWORK [conn40] Socket say send() Bad file descriptor 192.168.100.28:59865 [js_test:multi_coll_drop] 2016-04-06T02:53:48.912-0500 c20011| 2016-04-06T02:53:04.667-0500 I NETWORK [conn40] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:59865] [js_test:multi_coll_drop] 2016-04-06T02:53:48.920-0500 c20011| 2016-04-06T02:53:04.668-0500 I COMMAND [conn36] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:562 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12902ms [js_test:multi_coll_drop] 2016-04-06T02:53:48.920-0500 c20011| 2016-04-06T02:53:04.668-0500 D NETWORK [conn36] Socket say send() Bad file descriptor 192.168.100.28:59629 [js_test:multi_coll_drop] 2016-04-06T02:53:48.922-0500 c20011| 2016-04-06T02:53:04.668-0500 I NETWORK [conn36] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:59629] [js_test:multi_coll_drop] 2016-04-06T02:53:48.928-0500 c20011| 2016-04-06T02:53:04.669-0500 D NETWORK [conn32] SocketException: remote: 192.168.100.28:59473 error: 9001 socket exception [CLOSED] server [192.168.100.28:59473] [js_test:multi_coll_drop] 2016-04-06T02:53:48.930-0500 c20011| 2016-04-06T02:53:04.669-0500 I NETWORK [conn32] end connection 192.168.100.28:59473 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:48.935-0500 c20011| 2016-04-06T02:53:04.721-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33729 #48 (3 connections now open) 
[js_test:multi_coll_drop] 2016-04-06T02:53:48.901-0500 c20011| 2016-04-06T02:53:04.667-0500 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.908-0500 c20011| 2016-04-06T02:53:04.667-0500 I COMMAND [conn40] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.910-0500 c20011| 2016-04-06T02:53:04.667-0500 D NETWORK [conn40] Socket say send() Bad file descriptor 192.168.100.28:59865
[js_test:multi_coll_drop] 2016-04-06T02:53:48.912-0500 c20011| 2016-04-06T02:53:04.667-0500 I NETWORK [conn40] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:59865]
[js_test:multi_coll_drop] 2016-04-06T02:53:48.920-0500 c20011| 2016-04-06T02:53:04.668-0500 I COMMAND [conn36] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:562 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12902ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.920-0500 c20011| 2016-04-06T02:53:04.668-0500 D NETWORK [conn36] Socket say send() Bad file descriptor 192.168.100.28:59629
[js_test:multi_coll_drop] 2016-04-06T02:53:48.922-0500 c20011| 2016-04-06T02:53:04.668-0500 I NETWORK [conn36] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [192.168.100.28:59629]
[js_test:multi_coll_drop] 2016-04-06T02:53:48.928-0500 c20011| 2016-04-06T02:53:04.669-0500 D NETWORK [conn32] SocketException: remote: 192.168.100.28:59473 error: 9001 socket exception [CLOSED] server [192.168.100.28:59473]
[js_test:multi_coll_drop] 2016-04-06T02:53:48.930-0500 c20011| 2016-04-06T02:53:04.669-0500 I NETWORK [conn32] end connection 192.168.100.28:59473 (2 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.935-0500 c20011| 2016-04-06T02:53:04.721-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33729 #48 (3 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.935-0500 c20011| 2016-04-06T02:53:04.721-0500 D COMMAND [conn48] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.943-0500 c20011| 2016-04-06T02:53:04.721-0500 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.943-0500 c20011| 2016-04-06T02:53:04.721-0500 D COMMAND [conn48] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.944-0500 c20011| 2016-04-06T02:53:04.721-0500 I COMMAND [conn48] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.945-0500 c20011| 2016-04-06T02:53:04.721-0500 D COMMAND [conn48] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.949-0500 c20011| 2016-04-06T02:53:04.721-0500 I COMMAND [conn48] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.950-0500 c20011| 2016-04-06T02:53:04.745-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33731 #49 (4 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.951-0500 c20011| 2016-04-06T02:53:04.745-0500 D COMMAND [conn49] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.954-0500 c20011| 2016-04-06T02:53:04.746-0500 I COMMAND [conn49] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.955-0500 c20011| 2016-04-06T02:53:04.746-0500 D COMMAND [conn49] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.956-0500 c20011| 2016-04-06T02:53:04.746-0500 D COMMAND [conn49] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:48.960-0500 c20011| 2016-04-06T02:53:04.746-0500 I COMMAND [conn49] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } numYields:0 reslen:439 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.961-0500 c20011| 2016-04-06T02:53:04.917-0500 D REPL [rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp 1459929171000|2, t: 3 } -4814934274042927403
[js_test:multi_coll_drop] 2016-04-06T02:53:48.961-0500 c20011| 2016-04-06T02:53:05.169-0500 D COMMAND [conn47] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.966-0500 c20011| 2016-04-06T02:53:05.169-0500 I COMMAND [conn47] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.966-0500 c20011| 2016-04-06T02:53:05.222-0500 D COMMAND [conn48] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.968-0500 c20011| 2016-04-06T02:53:05.222-0500 I COMMAND [conn48] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.968-0500 c20011| 2016-04-06T02:53:05.674-0500 D COMMAND [conn47] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.975-0500 c20011| 2016-04-06T02:53:05.675-0500 I COMMAND [conn47] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.983-0500 c20011| 2016-04-06T02:53:05.676-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33771 #50 (5 connections now open)
[js_test:multi_coll_drop] 2016-04-06T02:53:48.984-0500 c20011| 2016-04-06T02:53:05.677-0500 D COMMAND [conn49] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.984-0500 c20011| 2016-04-06T02:53:05.677-0500 D COMMAND [conn49] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:48.986-0500 c20011| 2016-04-06T02:53:05.677-0500 D COMMAND [conn50] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.991-0500 c20011| 2016-04-06T02:53:05.677-0500 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:429 locks:{} protocol:op_query 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.992-0500 c20011| 2016-04-06T02:53:05.677-0500 D COMMAND [conn50] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:48.995-0500 c20011| 2016-04-06T02:53:05.677-0500 I COMMAND [conn50] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:48.996-0500 c20011| 2016-04-06T02:53:05.678-0500 D COMMAND [conn50] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.041-0500 c20011| 2016-04-06T02:53:05.678-0500 I COMMAND [conn50] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:49.048-0500 c20011| 2016-04-06T02:53:05.679-0500 I COMMAND [conn49] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } numYields:0 reslen:439 locks:{} protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:53:49.048-0500 c20011| 2016-04-06T02:53:05.723-0500 D COMMAND [conn48] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.052-0500 c20011| 2016-04-06T02:53:05.723-0500 I COMMAND [conn48] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:49.055-0500 c20011| 2016-04-06T02:53:05.739-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 335 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:15.739-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.062-0500 c20011| 2016-04-06T02:53:05.740-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 335 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:49.065-0500 c20011| 2016-04-06T02:53:05.741-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 335 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, opTime: { ts: Timestamp 1459929185000|1, t: 4 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.068-0500 c20011| 2016-04-06T02:53:05.741-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:08.241Z
[js_test:multi_coll_drop] 2016-04-06T02:53:49.070-0500 c20011| 2016-04-06T02:53:05.918-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:49.075-0500 c20011| 2016-04-06T02:53:05.918-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 337 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:53:35.918-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.076-0500 c20011| 2016-04-06T02:53:05.920-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 337 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:49.080-0500 c20011| 2016-04-06T02:53:05.920-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 337 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.082-0500 c20011| 2016-04-06T02:53:05.920-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20012 starting at filter: { ts: { $gte: Timestamp 1459929171000|2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.086-0500 c20011| 2016-04-06T02:53:05.920-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 339 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:53:10.920-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929171000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.091-0500 c20011| 2016-04-06T02:53:05.920-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.104-0500 c20011| 2016-04-06T02:53:05.920-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 341 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:49.105-0500 c20011| 2016-04-06T02:53:05.920-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:49.107-0500 c20011| 2016-04-06T02:53:05.920-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20012
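
Request 339 above is the standard oplog fetch that drives background sync: a tailable, awaitData cursor over local.oplog.rs starting at the syncing node's last fetched timestamp. The same read can be issued by hand against the sync source (values copied from the log; the internal "term" field is omitted, and oplogReplay was a real find option in servers of this vintage):

    // Hand-issued version of the bgsync fetcher's find on the sync source.
    db.getSiblingDB("local").runCommand({
        find: "oplog.rs",
        filter: { ts: { $gte: Timestamp(1459929171, 2) } },
        tailable: true,
        awaitData: true,
        oplogReplay: true,     // fast-seek optimization for ts-ordered oplog scans
        maxTimeMS: 60000
    });
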
[js_test:multi_coll_drop] 2016-04-06T02:53:49.107-0500 c20011| 2016-04-06T02:53:05.921-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] dropping unhealthy pooled connection to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.108-0500 c20011| 2016-04-06T02:53:05.921-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:49.111-0500 c20011| 2016-04-06T02:53:05.921-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.114-0500 c20011| 2016-04-06T02:53:05.921-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 342 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.118-0500 c20011| 2016-04-06T02:53:05.921-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 340 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.120-0500 c20011| 2016-04-06T02:53:05.922-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.122-0500 c20011| 2016-04-06T02:53:05.922-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 340 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:49.123-0500 c20011| 2016-04-06T02:53:05.922-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 339 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.125-0500 c20011| 2016-04-06T02:53:05.922-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.127-0500 c20011| 2016-04-06T02:53:05.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 342 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:49.128-0500 c20011| 2016-04-06T02:53:05.922-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 341 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.134-0500 c20011| 2016-04-06T02:53:05.922-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 339 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929185000|1, t: 4, h: -8800919752589540802, v: 2, op: "n", ns: "", o: { msg: "new primary" } } ], id: 23130095408, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.137-0500 c20011| 2016-04-06T02:53:05.923-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 341 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.140-0500 c20011| 2016-04-06T02:53:05.923-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929185000|1 and ending at ts: Timestamp 1459929185000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:49.143-0500 c20011| 2016-04-06T02:53:05.923-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 345 -- target:mongovm16:20012 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 23130095408 ] } [js_test:multi_coll_drop] 2016-04-06T02:53:49.145-0500 c20011| 2016-04-06T02:53:05.923-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 345 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.149-0500 c20011| 2016-04-06T02:53:05.923-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 345 finished with response: { cursorsKilled: [ 23130095408 ], cursorsNotFound: [], cursorsAlive: [], 
cursorsUnknown: [], ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.152-0500 c20011| 2016-04-06T02:53:05.923-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.155-0500 c20011| 2016-04-06T02:53:05.923-0500 I REPL [rsBackgroundSync] Starting rollback due to OplogStartMissing: our last op time fetched: { ts: Timestamp 1459929171000|2, t: 3 }. source's GTE: { ts: Timestamp 1459929185000|1, t: 4 } hashes: (-4814934274042927403/-8800919752589540802) [js_test:multi_coll_drop] 2016-04-06T02:53:49.156-0500 c20011| 2016-04-06T02:53:05.923-0500 I REPL [rsBackgroundSync] beginning rollback [js_test:multi_coll_drop] 2016-04-06T02:53:49.156-0500 c20011| 2016-04-06T02:53:05.923-0500 I REPL [rsBackgroundSync] rollback 0 [js_test:multi_coll_drop] 2016-04-06T02:53:49.156-0500 c20011| 2016-04-06T02:53:05.923-0500 I REPL [ReplicationExecutor] transition to ROLLBACK [js_test:multi_coll_drop] 2016-04-06T02:53:49.156-0500 c20011| 2016-04-06T02:53:05.923-0500 I REPL [rsBackgroundSync] rollback 1 [js_test:multi_coll_drop] 2016-04-06T02:53:49.158-0500 c20011| 2016-04-06T02:53:05.923-0500 D NETWORK [conn47] SocketException: remote: 192.168.100.28:33714 error: 9001 socket exception [CLOSED] server [192.168.100.28:33714] [js_test:multi_coll_drop] 2016-04-06T02:53:49.160-0500 c20011| 2016-04-06T02:53:05.923-0500 D NETWORK [conn48] SocketException: remote: 192.168.100.28:33729 error: 9001 socket exception [CLOSED] server [192.168.100.28:33729] [js_test:multi_coll_drop] 2016-04-06T02:53:49.163-0500 c20011| 2016-04-06T02:53:05.924-0500 I NETWORK [conn48] end connection 192.168.100.28:33729 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:49.166-0500 c20011| 2016-04-06T02:53:05.923-0500 D NETWORK [conn49] SocketException: remote: 192.168.100.28:33731 error: 9001 socket exception [CLOSED] server [192.168.100.28:33731] [js_test:multi_coll_drop] 2016-04-06T02:53:49.167-0500 c20011| 2016-04-06T02:53:05.924-0500 I NETWORK [conn49] end connection 192.168.100.28:33731 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:49.169-0500 c20011| 2016-04-06T02:53:05.924-0500 I NETWORK [conn47] end connection 192.168.100.28:33714 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:49.173-0500 c20011| 2016-04-06T02:53:05.924-0500 D NETWORK [conn50] SocketException: remote: 192.168.100.28:33771 error: 9001 socket exception [CLOSED] server [192.168.100.28:33771] [js_test:multi_coll_drop] 2016-04-06T02:53:49.174-0500 c20011| 2016-04-06T02:53:05.924-0500 I NETWORK [conn50] end connection 192.168.100.28:33771 (1 connection now open) [js_test:multi_coll_drop] 2016-04-06T02:53:49.175-0500 c20011| 2016-04-06T02:53:05.924-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:49.176-0500 c20011| 2016-04-06T02:53:05.924-0500 D NETWORK [rsBackgroundSync] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:49.179-0500 c20011| 2016-04-06T02:53:05.925-0500 I REPL [rsBackgroundSync] rollback 2 FindCommonPoint [js_test:multi_coll_drop] 2016-04-06T02:53:49.181-0500 c20011| 2016-04-06T02:53:05.925-0500 I REPL [rsBackgroundSync] rollback our last optime: Apr 6 02:52:51:2 [js_test:multi_coll_drop] 2016-04-06T02:53:49.186-0500 c20011| 2016-04-06T02:53:05.925-0500 I REPL [rsBackgroundSync] rollback their last optime: Apr 6 02:53:05:1 [js_test:multi_coll_drop] 2016-04-06T02:53:49.186-0500 c20011| 2016-04-06T02:53:05.925-0500 I REPL 
[rsBackgroundSync] rollback diff in end of log times: -14 seconds [js_test:multi_coll_drop] 2016-04-06T02:53:49.187-0500 c20011| 2016-04-06T02:53:05.925-0500 I REPL [rsBackgroundSync] rollback 3 fixup [js_test:multi_coll_drop] 2016-04-06T02:53:49.187-0500 c20011| 2016-04-06T02:53:05.925-0500 I REPL [rsBackgroundSync] rollback 3.5 [js_test:multi_coll_drop] 2016-04-06T02:53:49.190-0500 c20011| 2016-04-06T02:53:05.925-0500 I REPL [rsBackgroundSync] rollback 4 n:1 [js_test:multi_coll_drop] 2016-04-06T02:53:49.190-0500 c20011| 2016-04-06T02:53:05.925-0500 I REPL [rsBackgroundSync] minvalid={ ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.192-0500 c20011| 2016-04-06T02:53:05.925-0500 D QUERY [rsBackgroundSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:49.192-0500 c20011| 2016-04-06T02:53:05.926-0500 I REPL [rsBackgroundSync] rollback 4.6 [js_test:multi_coll_drop] 2016-04-06T02:53:49.192-0500 c20011| 2016-04-06T02:53:05.926-0500 I REPL [rsBackgroundSync] rollback 4.7 [js_test:multi_coll_drop] 2016-04-06T02:53:49.196-0500 c20011| 2016-04-06T02:53:05.926-0500 D QUERY [rsBackgroundSync] Using idhack: query: { _id: "mongovm16:20014" } sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:49.197-0500 c20011| 2016-04-06T02:53:05.926-0500 D QUERY [rsBackgroundSync] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:49.197-0500 c20011| 2016-04-06T02:53:05.926-0500 D QUERY [rsBackgroundSync] Using idhack: query: { _id: "mongovm16:20015" } sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:49.198-0500 c20011| 2016-04-06T02:53:05.926-0500 D QUERY [rsBackgroundSync] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:49.199-0500 c20011| 2016-04-06T02:53:05.926-0500 I REPL [rsBackgroundSync] rollback 5 d:0 u:2 [js_test:multi_coll_drop] 2016-04-06T02:53:49.200-0500 c20011| 2016-04-06T02:53:05.926-0500 I REPL [rsBackgroundSync] rollback 6 [js_test:multi_coll_drop] 2016-04-06T02:53:49.202-0500 c20011| 2016-04-06T02:53:05.926-0500 D REPL [rsBackgroundSync] rollback truncate oplog after Apr 6 02:52:43:8 [js_test:multi_coll_drop] 2016-04-06T02:53:49.203-0500 c20011| 2016-04-06T02:53:05.926-0500 D QUERY [rsBackgroundSync] Running query: query: {} sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:49.204-0500 c20011| 2016-04-06T02:53:05.926-0500 D QUERY [rsBackgroundSync] Collection admin.system.roles does not exist. 
Using EOF plan: query: {} sort: {} projection: {} [js_test:multi_coll_drop] 2016-04-06T02:53:49.206-0500 c20011| 2016-04-06T02:53:05.926-0500 I COMMAND [rsBackgroundSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 13, w: 4, W: 1 } }, Database: { acquireCount: { r: 4, w: 1, W: 3 } }, Collection: { acquireCount: { r: 3 } }, oplog: { acquireCount: { R: 1, W: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.207-0500 c20011| 2016-04-06T02:53:05.926-0500 I REPL [rsBackgroundSync] rollback done [js_test:multi_coll_drop] 2016-04-06T02:53:49.209-0500 c20011| 2016-04-06T02:53:05.926-0500 I REPL [rsBackgroundSync] rollback finished [js_test:multi_coll_drop] 2016-04-06T02:53:49.210-0500 c20011| 2016-04-06T02:53:05.926-0500 D NETWORK [conn28] SocketException: remote: 192.168.100.28:59434 error: 9001 socket exception [CLOSED] server [192.168.100.28:59434] [js_test:multi_coll_drop] 2016-04-06T02:53:49.214-0500 c20011| 2016-04-06T02:53:05.926-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:49.216-0500 c20011| 2016-04-06T02:53:05.926-0500 I NETWORK [conn28] end connection 192.168.100.28:59434 (0 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:49.222-0500 c20011| 2016-04-06T02:53:05.926-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 347 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:49.224-0500 c20011| 2016-04-06T02:53:05.926-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 347 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.225-0500 c20011| 2016-04-06T02:53:05.927-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 347 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.225-0500 c20011| 2016-04-06T02:53:05.937-0500 I REPL [ReplicationExecutor] transition to RECOVERING [js_test:multi_coll_drop] 2016-04-06T02:53:49.226-0500 c20011| 2016-04-06T02:53:05.938-0500 D REPL [rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp 1459929163000|8, t: 3 } -788849406847319887 [js_test:multi_coll_drop] 2016-04-06T02:53:49.228-0500 c20011| 2016-04-06T02:53:05.938-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.229-0500 c20011| 2016-04-06T02:53:05.938-0500 D ASIO [rsBackgroundSync] 
startCommand: RemoteCommand 349 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:53:35.938-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.231-0500 c20011| 2016-04-06T02:53:05.939-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 349 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.238-0500 c20011| 2016-04-06T02:53:05.939-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 349 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.240-0500 c20011| 2016-04-06T02:53:05.940-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20012 starting at filter: { ts: { $gte: Timestamp 1459929163000|8 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.246-0500 c20011| 2016-04-06T02:53:05.940-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 351 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:53:10.940-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929163000|8 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.254-0500 c20011| 2016-04-06T02:53:05.940-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 351 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.260-0500 c20011| 2016-04-06T02:53:05.940-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:49.266-0500 c20011| 2016-04-06T02:53:05.940-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 352 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:49.268-0500 c20011| 2016-04-06T02:53:05.940-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 352 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.284-0500 c20011| 2016-04-06T02:53:05.940-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 351 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929163000|8, t: 3, h: -788849406847319887, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04b65c17830b843f1c7'), state: 2, when: new Date(1459929163335), why: "splitting chunk [{ _id: -64.0 
}, { _id: MaxKey }) in multidrop.coll" } } }, { ts: Timestamp 1459929185000|1, t: 4, h: -8800919752589540802, v: 2, op: "n", ns: "", o: { msg: "new primary" } } ], id: 25053585400, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.284-0500 c20011| 2016-04-06T02:53:05.940-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 352 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.289-0500 c20011| 2016-04-06T02:53:05.942-0500 D REPL [rsBackgroundSync-0] fetcher read 2 operations from remote oplog starting at ts: Timestamp 1459929163000|8 and ending at ts: Timestamp 1459929185000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:49.305-0500 c20011| 2016-04-06T02:53:05.942-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:49.307-0500 c20011| 2016-04-06T02:53:05.942-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.309-0500 c20011| 2016-04-06T02:53:05.942-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.314-0500 c20011| 2016-04-06T02:53:05.942-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.315-0500 c20011| 2016-04-06T02:53:05.942-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.316-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.319-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.320-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.321-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.324-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.325-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.327-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.329-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.329-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.331-0500 c20011| 2016-04-06T02:53:05.943-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:49.333-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.335-0500 c20011| 
2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.335-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.338-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.338-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.339-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.339-0500 c20012| 2016-04-06T02:53:30.901-0500 D COMMAND [conn44] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.343-0500 c20012| 2016-04-06T02:53:30.902-0500 I COMMAND [conn44] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.347-0500 c20012| 2016-04-06T02:53:30.905-0500 D COMMAND [conn35] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.350-0500 c20012| 2016-04-06T02:53:30.905-0500 I COMMAND [conn35] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.355-0500 c20012| 2016-04-06T02:53:31.293-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.355-0500 c20012| 2016-04-06T02:53:31.293-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:49.360-0500 c20012| 2016-04-06T02:53:31.296-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 6 } numYields:0 reslen:509 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.362-0500 c20012| 2016-04-06T02:53:31.613-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1388 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:41.613-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.376-0500 c20012| 2016-04-06T02:53:31.613-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1388 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:49.384-0500 c20012| 2016-04-06T02:53:31.614-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1388 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 6, primaryId: 2, durableOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, opTime: { ts: Timestamp 1459929209000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.386-0500 c20012| 2016-04-06T02:53:31.614-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot optime: { ts: Timestamp 1459929201000|1, t: 5 }, currentCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.387-0500 c20012| 2016-04-06T02:53:31.614-0500 D REPL [ReplicationExecutor] 
Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:33.614Z [js_test:multi_coll_drop] 2016-04-06T02:53:49.387-0500 c20012| 2016-04-06T02:53:31.634-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.388-0500 c20012| 2016-04-06T02:53:31.634-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:49.390-0500 c20012| 2016-04-06T02:53:31.636-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 6 } numYields:0 reslen:509 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.395-0500 c20012| 2016-04-06T02:53:31.840-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.396-0500 c20012| 2016-04-06T02:53:31.840-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.402-0500 c20012| 2016-04-06T02:53:31.840-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.404-0500 c20012| 2016-04-06T02:53:31.840-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.409-0500 c20012| 2016-04-06T02:53:31.841-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.411-0500 c20012| 2016-04-06T02:53:31.846-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.412-0500 c20012| 2016-04-06T02:53:31.846-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.414-0500 c20012| 2016-04-06T02:53:31.846-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.416-0500 c20012| 2016-04-06T02:53:31.846-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.420-0500 c20012| 2016-04-06T02:53:31.851-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.424-0500 c20012| 2016-04-06T02:53:31.856-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.427-0500 c20012| 2016-04-06T02:53:31.856-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.429-0500 c20012| 2016-04-06T02:53:31.856-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.430-0500 c20012| 2016-04-06T02:53:31.856-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.435-0500 c20012| 2016-04-06T02:53:31.858-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.436-0500 c20012| 2016-04-06T02:53:31.871-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.441-0500 c20012| 2016-04-06T02:53:31.871-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.448-0500 c20012| 2016-04-06T02:53:31.871-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.451-0500 c20012| 2016-04-06T02:53:31.871-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.454-0500 c20012| 2016-04-06T02:53:31.871-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.459-0500 c20012| 2016-04-06T02:53:31.878-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.460-0500 c20012| 2016-04-06T02:53:31.878-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.464-0500 c20012| 2016-04-06T02:53:31.878-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.467-0500 c20012| 2016-04-06T02:53:31.878-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.470-0500 c20012| 2016-04-06T02:53:31.879-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.472-0500 c20012| 2016-04-06T02:53:31.884-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.476-0500 c20012| 2016-04-06T02:53:31.884-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.478-0500 c20012| 2016-04-06T02:53:31.884-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.479-0500 c20012| 2016-04-06T02:53:31.885-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.489-0500 c20012| 2016-04-06T02:53:31.886-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.496-0500 c20012| 2016-04-06T02:53:31.889-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.500-0500 c20012| 2016-04-06T02:53:31.889-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.504-0500 c20012| 2016-04-06T02:53:31.889-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.509-0500 c20012| 2016-04-06T02:53:31.889-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.514-0500 c20012| 2016-04-06T02:53:31.890-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.518-0500 c20012| 2016-04-06T02:53:31.894-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.520-0500 c20012| 2016-04-06T02:53:31.894-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.523-0500 c20012| 2016-04-06T02:53:31.894-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.526-0500 c20012| 2016-04-06T02:53:31.894-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.529-0500 c20012| 2016-04-06T02:53:31.895-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.531-0500 c20012| 2016-04-06T02:53:31.908-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.532-0500 c20012| 2016-04-06T02:53:31.908-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.536-0500 c20012| 2016-04-06T02:53:31.908-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.540-0500 c20012| 2016-04-06T02:53:31.909-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.547-0500 c20012| 2016-04-06T02:53:31.909-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.548-0500 c20012| 2016-04-06T02:53:31.914-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.550-0500 c20012| 2016-04-06T02:53:31.914-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.552-0500 c20012| 2016-04-06T02:53:31.914-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.552-0500 c20012| 2016-04-06T02:53:31.914-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.556-0500 c20012| 2016-04-06T02:53:31.915-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.558-0500 c20012| 2016-04-06T02:53:31.919-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.561-0500 c20012| 2016-04-06T02:53:31.919-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.563-0500 c20012| 2016-04-06T02:53:31.919-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.566-0500 c20012| 2016-04-06T02:53:31.919-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.569-0500 c20012| 2016-04-06T02:53:31.919-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.573-0500 c20012| 2016-04-06T02:53:31.922-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.579-0500 c20012| 2016-04-06T02:53:31.922-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.587-0500 c20012| 2016-04-06T02:53:31.923-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.591-0500 c20012| 2016-04-06T02:53:31.923-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.595-0500 c20012| 2016-04-06T02:53:31.923-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.601-0500 c20012| 2016-04-06T02:53:31.926-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.604-0500 c20012| 2016-04-06T02:53:31.926-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.606-0500 c20012| 2016-04-06T02:53:31.926-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.609-0500 c20012| 2016-04-06T02:53:31.927-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.613-0500 c20012| 2016-04-06T02:53:31.928-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.616-0500 c20012| 2016-04-06T02:53:31.931-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.620-0500 c20012| 2016-04-06T02:53:31.931-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.623-0500 c20012| 2016-04-06T02:53:31.931-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.625-0500 c20012| 2016-04-06T02:53:31.931-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.628-0500 c20012| 2016-04-06T02:53:31.931-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.631-0500 c20012| 2016-04-06T02:53:31.934-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.633-0500 c20012| 2016-04-06T02:53:31.934-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.635-0500 c20012| 2016-04-06T02:53:31.934-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.641-0500 c20012| 2016-04-06T02:53:31.934-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.646-0500 c20012| 2016-04-06T02:53:31.934-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.652-0500 c20012| 2016-04-06T02:53:31.937-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.655-0500 c20012| 2016-04-06T02:53:31.937-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.659-0500 c20012| 2016-04-06T02:53:31.937-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.663-0500 c20012| 2016-04-06T02:53:31.937-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.666-0500 c20012| 2016-04-06T02:53:31.937-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.667-0500 c20012| 2016-04-06T02:53:31.940-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.670-0500 c20012| 2016-04-06T02:53:31.940-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.673-0500 c20012| 2016-04-06T02:53:31.940-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.675-0500 c20012| 2016-04-06T02:53:31.940-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.680-0500 c20012| 2016-04-06T02:53:31.940-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.688-0500 c20012| 2016-04-06T02:53:31.944-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.689-0500 c20012| 2016-04-06T02:53:31.944-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.695-0500 c20012| 2016-04-06T02:53:31.944-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.697-0500 c20012| 2016-04-06T02:53:31.944-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.700-0500 c20012| 2016-04-06T02:53:31.944-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.703-0500 c20012| 2016-04-06T02:53:31.946-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.707-0500 c20012| 2016-04-06T02:53:31.947-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.709-0500 c20012| 2016-04-06T02:53:31.947-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.716-0500 c20012| 2016-04-06T02:53:31.947-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:49.719-0500 c20012| 2016-04-06T02:53:31.947-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.721-0500 c20012| 2016-04-06T02:53:31.995-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1390 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:41.995-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.722-0500 c20012| 2016-04-06T02:53:31.995-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1390 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.726-0500 c20012| 2016-04-06T02:53:32.567-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:49.734-0500 c20012| 2016-04-06T02:53:32.567-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1391 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929209000|1, t: 5 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:49.735-0500 c20012| 2016-04-06T02:53:32.567-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1391 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.736-0500 c20012| 2016-04-06T02:53:32.975-0500 D COMMAND [conn44] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.739-0500 c20012| 2016-04-06T02:53:32.975-0500 I COMMAND [conn44] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms 
[js_test:multi_coll_drop] 2016-04-06T02:53:49.740-0500 c20012| 2016-04-06T02:53:33.614-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1392 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:43.614-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.742-0500 c20012| 2016-04-06T02:53:33.614-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:49.744-0500 c20012| 2016-04-06T02:53:33.614-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:49.745-0500 c20012| 2016-04-06T02:53:33.614-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:49.746-0500 c20012| 2016-04-06T02:53:33.614-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1393 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:49.748-0500 c20012| 2016-04-06T02:53:33.615-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:49.749-0500 c20012| 2016-04-06T02:53:33.615-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1393 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:49.753-0500 c20012| 2016-04-06T02:53:33.615-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1392 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:49.756-0500 c20012| 2016-04-06T02:53:33.616-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1392 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 6, primaryId: 2, durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, opTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.758-0500 c20012| 2016-04-06T02:53:33.618-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:35.618Z [js_test:multi_coll_drop] 2016-04-06T02:53:49.760-0500 c20012| 2016-04-06T02:53:34.138-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.760-0500 c20012| 2016-04-06T02:53:34.138-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:49.763-0500 c20012| 2016-04-06T02:53:34.139-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 6 } numYields:0 reslen:509 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.763-0500 c20012| 2016-04-06T02:53:34.668-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.768-0500 c20012| 2016-04-06T02:53:34.669-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.774-0500 c20012| 2016-04-06T02:53:35.067-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1387 timed out, adjusted timeout after getting connection from pool was 5000ms, op was id: 23, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 
2016-04-06T02:53:30.067-0500, request: RemoteCommand 1387 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.067-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.780-0500 c20012| 2016-04-06T02:53:35.067-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Operation timing out; original request was: RemoteCommand 1387 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.067-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.789-0500 c20012| 2016-04-06T02:53:35.067-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to execute command: RemoteCommand 1387 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.067-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1387 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.067-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.797-0500 c20012| 2016-04-06T02:53:35.067-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1387 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1387 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.067-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.803-0500 c20012| 2016-04-06T02:53:35.067-0500 D REPL [rsBackgroundSync-0] Error returned from oplog query: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1387 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:35.067-0500 cmd:{ getMore: 22818882735, collection: "oplog.rs", maxTimeMS: 2500, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.806-0500 c20012| 2016-04-06T02:53:35.067-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.807-0500 c20012| 2016-04-06T02:53:35.067-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:49.810-0500 c20012| 2016-04-06T02:53:35.067-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 1390 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:41.995-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.815-0500 c20012| 2016-04-06T02:53:35.067-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:35.067Z [js_test:multi_coll_drop] 2016-04-06T02:53:49.816-0500 c20012| 2016-04-06T02:53:35.067-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:35.067Z [js_test:multi_coll_drop] 2016-04-06T02:53:49.823-0500 c20012| 2016-04-06T02:53:35.068-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1396 -- target:mongovm16:20011 db:admin 
expDate:2016-04-06T02:53:45.068-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.826-0500 c20012| 2016-04-06T02:53:35.068-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1397 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:41.995-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.829-0500 c20012| 2016-04-06T02:53:35.068-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.836-0500 c20012| 2016-04-06T02:53:35.068-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1390 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:41.995-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 6 } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:49.839-0500 c20012| 2016-04-06T02:53:35.068-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1390 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:49.840-0500 c20012| 2016-04-06T02:53:35.068-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1396 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:49.844-0500 c20012| 2016-04-06T02:53:35.068-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1398 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.847-0500 c20012| 2016-04-06T02:53:35.076-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1396 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 6, primaryId: 2, durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, opTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.847-0500 c20012| 2016-04-06T02:53:35.076-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:37.576Z [js_test:multi_coll_drop] 2016-04-06T02:53:49.851-0500 s20015| 2016-04-06T02:53:30.045-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 118 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929209000|1, t: 6 }, electionId: ObjectId('7fffffff0000000000000006') } [js_test:multi_coll_drop] 2016-04-06T02:53:49.852-0500 c20013| 2016-04-06T02:52:26.896-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.856-0500 c20013| 2016-04-06T02:52:26.896-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:49.856-0500 c20013| 2016-04-06T02:52:26.897-0500 D QUERY [repl writer worker 5] Using idhack: { _id: "multidrop.coll-_id_-76.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:49.859-0500 c20013| 2016-04-06T02:52:26.897-0500 D QUERY [repl writer worker 5] Using idhack: { _id: "multidrop.coll-_id_-75.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:49.862-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.864-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer 
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.865-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.868-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.873-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.875-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.877-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.880-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.882-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.883-0500 c20011| 2016-04-06T02:53:05.944-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.884-0500 c20011| 2016-04-06T02:53:05.943-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.884-0500 c20011| 2016-04-06T02:53:05.944-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.884-0500 c20011| 2016-04-06T02:53:05.944-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.888-0500 c20011| 2016-04-06T02:53:05.944-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.888-0500 c20011| 2016-04-06T02:53:05.944-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.895-0500 c20011| 2016-04-06T02:53:05.944-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 355 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:53:10.944-0500 cmd:{ getMore: 25053585400, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:49.895-0500 c20011| 2016-04-06T02:53:05.945-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.898-0500 c20011| 2016-04-06T02:53:05.946-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 355 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:49.900-0500 c20011| 2016-04-06T02:53:05.946-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.904-0500 c20011| 2016-04-06T02:53:05.946-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.907-0500 c20011| 2016-04-06T02:53:05.946-0500 D EXECUTOR [repl writer worker 3] 
shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.910-0500 c20011| 2016-04-06T02:53:05.947-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:49.911-0500 c20011| 2016-04-06T02:53:05.947-0500 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:49.925-0500 c20011| 2016-04-06T02:53:05.947-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:49.927-0500 c20011| 2016-04-06T02:53:05.948-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20012: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:49.934-0500 c20011| 2016-04-06T02:53:05.948-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:53:49.938-0500 c20011| 2016-04-06T02:53:06.668-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33839 #51 (1 connection now open) [js_test:multi_coll_drop] 2016-04-06T02:53:49.939-0500 c20011| 2016-04-06T02:53:06.668-0500 D COMMAND [conn51] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:53:49.945-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:49.949-0500 s20015| 2016-04-06T02:53:30.046-0500 D ASIO [Balancer] startCommand: RemoteCommand 121 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:00.046-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.952-0500 s20015| 2016-04-06T02:53:30.047-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 121 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.956-0500 s20015| 2016-04-06T02:53:30.048-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 121 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.957-0500 s20015| 2016-04-06T02:53:30.048-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929209000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.958-0500 s20015| 2016-04-06T02:53:30.048-0500 D ASIO [Balancer] startCommand: RemoteCommand 123 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:00.048-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.961-0500 s20015| 2016-04-06T02:53:30.048-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 123 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.963-0500 s20015| 2016-04-06T02:53:30.051-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 123 finished with response: { waitedMS: 0, 
cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.966-0500 s20015| 2016-04-06T02:53:30.052-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:53:49.967-0500 s20015| 2016-04-06T02:53:30.052-0500 D ASIO [Balancer] startCommand: RemoteCommand 125 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:00.052-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929209000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.968-0500 s20015| 2016-04-06T02:53:30.052-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 125 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.973-0500 s20015| 2016-04-06T02:53:30.054-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 125 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.973-0500 s20015| 2016-04-06T02:53:30.054-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:53:49.979-0500 s20015| 2016-04-06T02:53:30.054-0500 D ASIO [Balancer] startCommand: RemoteCommand 127 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:00.054-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929210054), up: 83, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.981-0500 s20015| 2016-04-06T02:53:30.054-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 127 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:49.984-0500 s20015| 2016-04-06T02:53:30.069-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 127 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929210000|1, t: 6 }, electionId: ObjectId('7fffffff0000000000000006') } [js_test:multi_coll_drop] 2016-04-06T02:53:49.985-0500 c20011| 2016-04-06T02:53:06.675-0500 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:429 locks:{} protocol:op_query 6ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.987-0500 c20011| 2016-04-06T02:53:06.675-0500 D COMMAND [conn51] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.988-0500 c20011| 2016-04-06T02:53:06.675-0500 D COMMAND [conn51] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:49.990-0500 c20011| 2016-04-06T02:53:06.675-0500 I COMMAND [conn51] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:470 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.991-0500 c20011| 2016-04-06T02:53:06.829-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33850 #52 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:49.992-0500 c20011| 2016-04-06T02:53:06.834-0500 D COMMAND [conn52] run 
command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.995-0500 c20011| 2016-04-06T02:53:06.834-0500 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:429 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:49.997-0500 c20011| 2016-04-06T02:53:06.834-0500 D COMMAND [conn52] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:49.998-0500 c20011| 2016-04-06T02:53:06.840-0500 I COMMAND [conn52] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:414 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:50.006-0500 c20011| 2016-04-06T02:53:07.167-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 356 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:17.167-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.028-0500 c20011| 2016-04-06T02:53:07.167-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 356 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:50.042-0500 c20011| 2016-04-06T02:53:07.167-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 356 finished with response: { ok: 1.0, electionTime: new Date(6270348099755966465), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, opTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:50.046-0500 c20011| 2016-04-06T02:53:07.170-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.049-0500 c20011| 2016-04-06T02:53:07.170-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:50.052-0500 c20011| 2016-04-06T02:53:07.170-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:09.170Z [js_test:multi_coll_drop] 2016-04-06T02:53:50.052-0500 c20011| 2016-04-06T02:53:08.184-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33926 #53 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:50.053-0500 c20011| 2016-04-06T02:53:08.184-0500 D COMMAND [conn53] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:53:50.056-0500 c20011| 2016-04-06T02:53:08.184-0500 I COMMAND [conn53] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:50.063-0500 c20011| 2016-04-06T02:53:08.185-0500 D COMMAND [conn53] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.066-0500 s20014| 2016-04-06T02:53:34.668-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:50.069-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.071-0500 d20010| 2016-04-06T02:53:30.046-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: 
LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.081-0500 d20010| 2016-04-06T02:53:30.046-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 41.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:250 locks:{} protocol:op_command 927ms [js_test:multi_coll_drop] 2016-04-06T02:53:50.081-0500 s20014| 2016-04-06T02:53:34.668-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:53:50.085-0500 d20010| 2016-04-06T02:53:30.049-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 42.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.087-0500 d20010| 2016-04-06T02:53:30.054-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.090-0500 d20010| 2016-04-06T02:53:31.842-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 43.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.091-0500 d20010| 2016-04-06T02:53:31.845-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.094-0500 d20010| 2016-04-06T02:53:31.852-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 44.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.096-0500 d20010| 2016-04-06T02:53:31.855-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.103-0500 d20010| 2016-04-06T02:53:31.867-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 45.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: 
ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.105-0500 d20010| 2016-04-06T02:53:31.870-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.123-0500 d20010| 2016-04-06T02:53:31.872-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 46.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.127-0500 d20010| 2016-04-06T02:53:31.874-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.129-0500 d20010| 2016-04-06T02:53:31.879-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 47.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.130-0500 d20010| 2016-04-06T02:53:31.883-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.132-0500 d20010| 2016-04-06T02:53:31.886-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 48.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.133-0500 d20010| 2016-04-06T02:53:31.889-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.139-0500 d20010| 2016-04-06T02:53:31.890-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 49.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.142-0500 d20010| 2016-04-06T02:53:31.893-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.147-0500 d20010| 2016-04-06T02:53:31.898-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", 
keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 50.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.150-0500 d20010| 2016-04-06T02:53:31.901-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.152-0500 d20010| 2016-04-06T02:53:31.909-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 51.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.155-0500 d20010| 2016-04-06T02:53:31.914-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.158-0500 d20010| 2016-04-06T02:53:31.915-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 52.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.159-0500 d20010| 2016-04-06T02:53:31.918-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.163-0500 d20010| 2016-04-06T02:53:31.919-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 53.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.167-0500 d20010| 2016-04-06T02:53:31.922-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.168-0500 d20010| 2016-04-06T02:53:31.923-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 54.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.172-0500 d20010| 2016-04-06T02:53:31.926-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to 
split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.180-0500 d20010| 2016-04-06T02:53:31.928-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 55.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.184-0500 d20010| 2016-04-06T02:53:31.930-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.188-0500 d20010| 2016-04-06T02:53:31.932-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 56.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.189-0500 d20010| 2016-04-06T02:53:31.934-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.192-0500 d20010| 2016-04-06T02:53:31.935-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 57.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.195-0500 d20010| 2016-04-06T02:53:31.936-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.198-0500 d20010| 2016-04-06T02:53:31.937-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 58.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.203-0500 d20010| 2016-04-06T02:53:31.939-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.209-0500 d20010| 2016-04-06T02:53:31.940-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 59.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", 
shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.220-0500 d20010| 2016-04-06T02:53:31.943-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.223-0500 d20010| 2016-04-06T02:53:31.945-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 60.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.224-0500 d20010| 2016-04-06T02:53:31.946-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:50.228-0500 d20010| 2016-04-06T02:53:31.947-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 61.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:50.230-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.233-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.235-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.236-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.237-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.239-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.241-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.243-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.243-0500 c20013| 2016-04-06T02:52:26.898-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:50.245-0500 c20013| 2016-04-06T02:52:26.902-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:50.250-0500 c20013| 2016-04-06T02:52:26.902-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:50.255-0500 c20013| 2016-04-06T02:52:26.902-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1234 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:50.257-0500 c20013| 2016-04-06T02:52:26.902-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1234 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:50.258-0500 c20013| 2016-04-06T02:52:26.903-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1234 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.260-0500 c20013| 2016-04-06T02:52:26.903-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20012: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:50.264-0500 c20013| 2016-04-06T02:52:26.903-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1236 -- target:mongovm16:20012 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, appliedOpTime: { ts: Timestamp 1459929142000|12, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, appliedOpTime: { ts: Timestamp 1459929130000|10, t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:50.265-0500 c20013| 2016-04-06T02:52:26.903-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1236 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:50.267-0500 c20013| 2016-04-06T02:52:27.555-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: 
"multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.267-0500 c20013| 2016-04-06T02:52:27.556-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:50.271-0500 c20013| 2016-04-06T02:52:27.556-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:50.276-0500 c20013| 2016-04-06T02:52:28.056-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1237 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:38.056-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.277-0500 c20013| 2016-04-06T02:52:28.056-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1237 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:50.284-0500 c20013| 2016-04-06T02:52:28.056-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1237 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:50.286-0500 c20013| 2016-04-06T02:52:28.056-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:30.056Z [js_test:multi_coll_drop] 2016-04-06T02:53:50.287-0500 c20013| 2016-04-06T02:52:28.811-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1239 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.295-0500 c20013| 2016-04-06T02:52:28.812-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1239 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:50.297-0500 c20013| 2016-04-06T02:52:29.559-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.298-0500 c20013| 2016-04-06T02:52:29.559-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:50.301-0500 c20013| 2016-04-06T02:52:29.559-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:50.304-0500 c20013| 2016-04-06T02:52:30.056-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1240 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:40.056-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.305-0500 c20013| 2016-04-06T02:52:30.056-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1240 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:50.309-0500 c20013| 2016-04-06T02:52:30.057-0500 D ASIO [NetworkInterfaceASIO-Replication-0] 
Request 1240 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:50.311-0500 c20013| 2016-04-06T02:52:30.057-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:32.057Z [js_test:multi_coll_drop] 2016-04-06T02:53:50.317-0500 c20013| 2016-04-06T02:52:31.560-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.322-0500 c20013| 2016-04-06T02:52:31.560-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:50.327-0500 c20013| 2016-04-06T02:52:31.566-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:53:50.336-0500 c20013| 2016-04-06T02:52:31.896-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1233 timed out, adjusted timeout after getting connection from pool was 5000ms, op was id: 9, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 2016-04-06T02:52:26.896-0500, request: RemoteCommand 1233 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.896-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:50.340-0500 c20013| 2016-04-06T02:52:31.896-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Operation timing out; original request was: RemoteCommand 1233 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.896-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:50.344-0500 c20013| 2016-04-06T02:52:31.896-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to execute command: RemoteCommand 1233 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.896-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1233 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.896-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:50.349-0500 c20013| 2016-04-06T02:52:31.896-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1233 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1233 -- target:mongovm16:20012 db:local expDate:2016-04-06T02:52:31.896-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:50.353-0500 c20013| 2016-04-06T02:52:31.898-0500 D REPL [rsBackgroundSync-0] Error returned from oplog query: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1233 -- target:mongovm16:20012 db:local 
expDate:2016-04-06T02:52:31.896-0500 cmd:{ getMore: 25449496203, collection: "oplog.rs", maxTimeMS: 2500, term: 2, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:53:50.355-0500 c20013| 2016-04-06T02:52:31.898-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:50.358-0500 c20013| 2016-04-06T02:52:31.900-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:50.372-0500 c20013| 2016-04-06T02:52:31.900-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 1239 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.374-0500 c20013| 2016-04-06T02:52:31.900-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:31.900Z [js_test:multi_coll_drop] 2016-04-06T02:53:50.375-0500 c20013| 2016-04-06T02:52:31.900-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:31.900Z [js_test:multi_coll_drop] 2016-04-06T02:53:50.377-0500 c20013| 2016-04-06T02:52:31.900-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1243 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:41.900-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.381-0500 c20013| 2016-04-06T02:52:31.901-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1244 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.385-0500 c20013| 2016-04-06T02:52:31.901-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:50.395-0500 c20013| 2016-04-06T02:52:31.901-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1239 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:38.811-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 2 } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:50.401-0500 c20013| 2016-04-06T02:52:31.901-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1239 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:50.407-0500 c20013| 2016-04-06T02:52:31.901-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1243 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:50.409-0500 c20013| 2016-04-06T02:52:31.901-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1245 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:50.415-0500 c20013| 2016-04-06T02:52:31.901-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } [js_test:multi_coll_drop] 2016-04-06T02:53:50.416-0500 c20013| 2016-04-06T02:52:31.901-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:50.420-0500 c20013| 
[js_test:multi_coll_drop] 2016-04-06T02:53:50.420-0500 c20013| 2016-04-06T02:52:31.901-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 2 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.424-0500 c20013| 2016-04-06T02:52:31.901-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1243 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, primaryId: 1, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.426-0500 c20013| 2016-04-06T02:52:31.901-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:34.401Z
[js_test:multi_coll_drop] 2016-04-06T02:53:50.429-0500 c20013| 2016-04-06T02:52:32.204-0500 D COMMAND [conn7] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 2, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929146000|10, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.431-0500 c20013| 2016-04-06T02:52:32.204-0500 D COMMAND [conn7] command: replSetRequestVotes
[js_test:multi_coll_drop] 2016-04-06T02:53:50.432-0500 c20013| 2016-04-06T02:52:32.205-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:50.435-0500 c20013| 2016-04-06T02:52:32.206-0500 I COMMAND [conn7] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 2, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929146000|10, t: 2 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.443-0500 c20013| 2016-04-06T02:52:32.206-0500 D COMMAND [conn7] run command admin.$cmd { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 3, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929146000|10, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.444-0500 c20013| 2016-04-06T02:52:32.206-0500 D COMMAND [conn7] command: replSetRequestVotes
[js_test:multi_coll_drop] 2016-04-06T02:53:50.447-0500 c20013| 2016-04-06T02:52:32.206-0500 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:50.452-0500 c20013| 2016-04-06T02:52:32.206-0500 I COMMAND [conn7] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 3, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929146000|10, t: 2 } } numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.456-0500 c20013| 2016-04-06T02:52:32.207-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.461-0500 c20013| 2016-04-06T02:52:32.207-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:50.466-0500 c20013| 2016-04-06T02:52:32.207-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:478 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.472-0500 c20013| 2016-04-06T02:52:34.209-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.472-0500 c20013| 2016-04-06T02:52:34.209-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:50.477-0500 c20013| 2016-04-06T02:52:34.209-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:478 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.481-0500 c20013| 2016-04-06T02:52:34.401-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1248 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:44.401-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.482-0500 c20013| 2016-04-06T02:52:34.401-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1248 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.485-0500 c20013| 2016-04-06T02:52:34.402-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1248 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.486-0500 c20013| 2016-04-06T02:52:34.402-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state PRIMARY
[js_test:multi_coll_drop] 2016-04-06T02:53:50.489-0500 c20013| 2016-04-06T02:52:34.402-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:36.902Z
[js_test:multi_coll_drop] 2016-04-06T02:53:50.491-0500 c20013| 2016-04-06T02:52:34.902-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20011
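
The two replSetRequestVotes rounds above are the election protocol in miniature: a dryRun probe in the current term to check electability without disturbing any voter's state, then a real vote request in the bumped term, with each granted vote recorded in local.replset.election (hence the COLLSCAN on that collection). A sketch of the same internal command with values copied from the entries above; replSetRequestVotes is internal and shown here only to make the two-phase shape concrete:

    // Phase 1: dry run in the current term (2). Voters answer but do not
    // change any durable state.
    db.adminCommand({
        replSetRequestVotes: 1, setName: "multidrop-configRS",
        dryRun: true, term: 2, candidateIndex: 0, configVersion: 1,
        lastCommittedOp: { ts: Timestamp(1459929146, 10), t: 2 }
    });
    // Phase 2: real election in the incremented term (3); a granted vote
    // is persisted by the voter before it replies.
    db.adminCommand({
        replSetRequestVotes: 1, setName: "multidrop-configRS",
        dryRun: false, term: 3, candidateIndex: 0, configVersion: 1,
        lastCommittedOp: { ts: Timestamp(1459929146, 10), t: 2 }
    });
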
[js_test:multi_coll_drop] 2016-04-06T02:53:50.492-0500 c20013| 2016-04-06T02:52:34.902-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1250 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:53:04.902-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.495-0500 c20013| 2016-04-06T02:52:34.902-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1250 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.496-0500 c20013| 2016-04-06T02:52:34.902-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1250 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.502-0500 c20013| 2016-04-06T02:52:34.903-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20011 starting at filter: { ts: { $gte: Timestamp 1459929146000|10 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.505-0500 c20013| 2016-04-06T02:52:34.903-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 1252 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:39.903-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929146000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.508-0500 c20013| 2016-04-06T02:52:34.903-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1252 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.514-0500 c20013| 2016-04-06T02:52:34.903-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1252 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929146000|10, t: 2, h: 8129632561130330747, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: -75.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-76.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } }, { ts: Timestamp 1459929152000|2, t: 3, h: -6846298690708567284, v: 2, op: "n", ns: "", o: { msg: "new primary" } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.516-0500 c20013| 2016-04-06T02:52:34.903-0500 D REPL [rsBackgroundSync-0] fetcher read 2 operations from remote oplog starting at ts: Timestamp 1459929146000|10 and ending at ts: Timestamp 1459929152000|2
[js_test:multi_coll_drop] 2016-04-06T02:53:50.517-0500 c20013| 2016-04-06T02:52:34.903-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:50.518-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.518-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.519-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.520-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.520-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.521-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.525-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.526-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.527-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.528-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.529-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.529-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.529-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.530-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.531-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.532-0500 c20013| 2016-04-06T02:52:34.904-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:50.537-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.539-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.540-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.541-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
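
The first batch fetched above carries the chunk-split metadata as a single applyOps command against config.chunks, so the shrunken chunk and its new right-hand neighbour commit atomically with w: "majority". A sketch for inspecting the resulting chunk documents, assuming a shell connection to a config server; namespace and field names are taken from the oplog entry above:

    // The two chunk documents written by the applyOps in the batch above:
    // multidrop.coll-_id_-76.0 (now capped at -75.0) and the new
    // multidrop.coll-_id_-75.0 (running to MaxKey).
    db.getSiblingDB("config").chunks
        .find({ ns: "multidrop.coll" }, { lastmod: 1, min: 1, max: 1, shard: 1 })
        .sort({ min: 1 });
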
[js_test:multi_coll_drop] 2016-04-06T02:53:50.541-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.543-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.544-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.555-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.555-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.557-0500 c20013| 2016-04-06T02:52:34.904-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.561-0500 c20013| 2016-04-06T02:52:34.905-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1254 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:39.905-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929146000|9, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.565-0500 c20013| 2016-04-06T02:52:34.906-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1254 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.566-0500 c20013| 2016-04-06T02:52:34.907-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.568-0500 c20013| 2016-04-06T02:52:34.907-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.569-0500 c20013| 2016-04-06T02:52:34.907-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.570-0500 c20013| 2016-04-06T02:52:34.907-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.572-0500 c20013| 2016-04-06T02:52:34.908-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.577-0500 c20013| 2016-04-06T02:52:34.911-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.592-0500 c20013| 2016-04-06T02:52:34.912-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.592-0500 c20013| 2016-04-06T02:52:34.912-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:50.594-0500 c20013| 2016-04-06T02:52:36.210-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.595-0500 c20013| 2016-04-06T02:52:36.210-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:50.598-0500 c20013| 2016-04-06T02:52:36.210-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:509 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.599-0500 c20013| 2016-04-06T02:52:36.211-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1254 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.599-0500 c20013| 2016-04-06T02:52:36.211-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929152000|2, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.600-0500 c20013| 2016-04-06T02:52:36.211-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:50.604-0500 c20013| 2016-04-06T02:52:36.211-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1256 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:41.211-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.612-0500 c20013| 2016-04-06T02:52:36.211-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1256 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.616-0500 c20013| 2016-04-06T02:52:36.902-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1257 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:46.902-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.626-0500 c20013| 2016-04-06T02:52:36.902-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1257 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.629-0500 c20013| 2016-04-06T02:52:36.903-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1257 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.630-0500 c20013| 2016-04-06T02:52:36.903-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:38.903Z
[js_test:multi_coll_drop] 2016-04-06T02:53:50.631-0500 c20013| 2016-04-06T02:52:38.210-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.631-0500 c20013| 2016-04-06T02:52:38.210-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:50.636-0500 c20013| 2016-04-06T02:52:38.212-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:509 locks:{} protocol:op_command 1ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.641-0500 c20013| 2016-04-06T02:52:38.712-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1256 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.642-0500 c20013| 2016-04-06T02:52:38.712-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:50.645-0500 c20013| 2016-04-06T02:52:38.713-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1260 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:43.712-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.649-0500 c20013| 2016-04-06T02:52:38.713-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1260 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.649-0500 c20013| 2016-04-06T02:52:38.717-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.654-0500 c20013| 2016-04-06T02:52:38.717-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.655-0500 c20013| 2016-04-06T02:52:38.811-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 1244: ExceededTimeLimit: Couldn't get a connection within the time limit
[js_test:multi_coll_drop] 2016-04-06T02:53:50.661-0500 c20013| 2016-04-06T02:52:38.811-0500 I REPL [ReplicationExecutor] Error in heartbeat request to mongovm16:20012; ExceededTimeLimit: Couldn't get a connection within the time limit
[js_test:multi_coll_drop] 2016-04-06T02:53:50.663-0500 c20013| 2016-04-06T02:52:38.811-0500 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:1, msg: Couldn't get a connection within the time limit
[js_test:multi_coll_drop] 2016-04-06T02:53:50.666-0500 c20013| 2016-04-06T02:52:38.811-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:40.811Z
[js_test:multi_coll_drop] 2016-04-06T02:53:50.671-0500 c20013| 2016-04-06T02:52:38.903-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1261 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:48.903-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.672-0500 c20013| 2016-04-06T02:52:38.903-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1261 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.681-0500 c20013| 2016-04-06T02:52:38.907-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1261 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.681-0500 c20013| 2016-04-06T02:52:38.907-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:40.907Z
[js_test:multi_coll_drop] 2016-04-06T02:53:50.689-0500 c20013| 2016-04-06T02:52:40.212-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.690-0500 c20013| 2016-04-06T02:52:40.212-0500 D COMMAND [conn7] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:50.695-0500 c20013| 2016-04-06T02:52:40.213-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:509 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.702-0500 c20013| 2016-04-06T02:52:40.812-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1263 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:50.812-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.705-0500 c20013| 2016-04-06T02:52:40.907-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1264 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:50.907-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.708-0500 c20013| 2016-04-06T02:52:40.907-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1264 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.712-0500 c20013| 2016-04-06T02:52:40.907-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1264 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, opTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.715-0500 c20013| 2016-04-06T02:52:40.907-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:42.907Z
[js_test:multi_coll_drop] 2016-04-06T02:53:50.719-0500 c20013| 2016-04-06T02:52:41.213-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1260 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.719-0500 c20013| 2016-04-06T02:52:41.215-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog
[js_test:multi_coll_drop] 2016-04-06T02:53:50.721-0500 c20013| 2016-04-06T02:52:41.215-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1267 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.215-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.723-0500 c20013| 2016-04-06T02:52:41.215-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1267 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.726-0500 c20013| 2016-04-06T02:52:41.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1236 finished with response: { ok: 1.0 }
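
Request 1244 never got a pooled connection, so the heartbeat failed client-side and setDownValues marked member _id:1 down until a later heartbeat succeeds. How this node currently sees its peers can be checked with replSetGetStatus; a sketch, assuming a direct shell connection to c20013:

    // replSetGetStatus reports each member's state (PRIMARY, SECONDARY,
    // DOWN, ...) plus the heartbeat times and optimes behind that view.
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        print(m.name + "  " + m.stateStr + "  lastHeartbeat=" + m.lastHeartbeat);
    });
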
[js_test:multi_coll_drop] 2016-04-06T02:53:50.729-0500 c20013| 2016-04-06T02:52:41.707-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid
[js_test:multi_coll_drop] 2016-04-06T02:53:50.733-0500 c20013| 2016-04-06T02:52:41.707-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20012: InvalidSyncSource: Sync target is no longer valid
[js_test:multi_coll_drop] 2016-04-06T02:53:50.735-0500 c20013| 2016-04-06T02:52:41.707-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid
[js_test:multi_coll_drop] 2016-04-06T02:53:50.736-0500 c20013| 2016-04-06T02:52:41.707-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.741-0500 c20013| 2016-04-06T02:52:41.707-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.749-0500 c20013| 2016-04-06T02:52:41.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1269 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.750-0500 c20013| 2016-04-06T02:52:41.707-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1269 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.752-0500 c20013| 2016-04-06T02:52:41.708-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1269 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.756-0500 c20013| 2016-04-06T02:52:41.719-0500 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.757-0500 c20013| 2016-04-06T02:52:41.720-0500 D COMMAND [conn14] command: replSetHeartbeat
[js_test:multi_coll_drop] 2016-04-06T02:53:50.757-0500 c20013| 2016-04-06T02:52:41.720-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.759-0500 c20013| 2016-04-06T02:52:41.720-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:50.760-0500 c20013| 2016-04-06T02:52:41.720-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1245 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:53:50.763-0500 c20013| 2016-04-06T02:52:41.720-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1263 on host mongovm16:20012
[js_test:multi_coll_drop] 2016-04-06T02:53:50.767-0500 c20013| 2016-04-06T02:52:41.720-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.768-0500 c20013| 2016-04-06T02:52:41.720-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.778-0500 c20013| 2016-04-06T02:52:41.720-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.778-0500 c20013| 2016-04-06T02:52:41.721-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.780-0500 c20013| 2016-04-06T02:52:41.721-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.784-0500 c20013| 2016-04-06T02:52:41.722-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1267 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|1, t: 3, h: 724532987243091218, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929152631), up: 25, waiting: false } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.790-0500 c20013| 2016-04-06T02:52:41.723-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1263 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 2, durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, opTime: { ts: Timestamp 1459929161000|3, t: 2 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.793-0500 c20013| 2016-04-06T02:52:41.723-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|1 and ending at ts: Timestamp 1459929161000|1
[js_test:multi_coll_drop] 2016-04-06T02:53:50.801-0500 c20013| 2016-04-06T02:52:41.723-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot optime: { ts: Timestamp 1459929146000|10, t: 2 }, currentCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.802-0500 c20013| 2016-04-06T02:52:41.723-0500 I REPL [ReplicationExecutor] Member mongovm16:20012 is now in state SECONDARY
[js_test:multi_coll_drop] 2016-04-06T02:53:50.807-0500 c20013| 2016-04-06T02:52:41.723-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:52:43.723Z
[js_test:multi_coll_drop] 2016-04-06T02:53:50.811-0500 c20013| 2016-04-06T02:52:41.726-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1273 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.726-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.813-0500 c20013| 2016-04-06T02:52:41.726-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1273 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.817-0500 c20013| 2016-04-06T02:52:41.726-0500 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 2 } numYields:0 reslen:489 locks:{} protocol:op_command 6ms
[js_test:multi_coll_drop] 2016-04-06T02:53:50.824-0500 c20013| 2016-04-06T02:52:41.726-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1273 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|2, t: 3, h: -8330485042973896426, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:41.710-0500-5704c04965c17830b843f1b0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161710), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -76.0 }, max: { _id: MaxKey } }, left: { min: { _id: -76.0 }, max: { _id: -75.0 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -75.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } }, { ts: Timestamp 1459929161000|3, t: 3, h: 348221258137002286, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929151652), up: 24, waiting: false } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.826-0500 c20013| 2016-04-06T02:52:41.726-0500 D REPL [rsBackgroundSync-0] fetcher read 2 operations from remote oplog starting at ts: Timestamp 1459929161000|2 and ending at ts: Timestamp 1459929161000|3
[js_test:multi_coll_drop] 2016-04-06T02:53:50.827-0500 c20013| 2016-04-06T02:52:41.727-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:50.832-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.834-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.835-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.837-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.840-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.841-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.844-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.851-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.854-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.856-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.857-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.860-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.861-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.865-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.865-0500 c20013| 2016-04-06T02:52:41.727-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:50.868-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.868-0500 c20013| 2016-04-06T02:52:41.727-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "mongovm16:20015" }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.869-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.871-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.874-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.879-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.879-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.880-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.884-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.887-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.888-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.889-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.890-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.891-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.893-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.894-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.899-0500 c20013| 2016-04-06T02:52:41.727-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.900-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.900-0500 c20013| 2016-04-06T02:52:41.728-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.903-0500 c20013| 2016-04-06T02:52:41.728-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1275 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.728-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929152000|2, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.904-0500 c20013| 2016-04-06T02:52:41.728-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1275 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.906-0500 c20013| 2016-04-06T02:52:41.729-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:50.910-0500 c20013| 2016-04-06T02:52:41.729-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:50.915-0500 c20013| 2016-04-06T02:52:41.729-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|1, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.918-0500 c20013| 2016-04-06T02:52:41.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1276 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|1, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.919-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.921-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.924-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.928-0500 c20013| 2016-04-06T02:52:41.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1276 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:50.928-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.930-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.945-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.948-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.949-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.952-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.953-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.958-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.962-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.962-0500 c20013| 2016-04-06T02:52:41.729-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1276 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.964-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.966-0500 c20013| 2016-04-06T02:52:41.729-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.968-0500 c20013| 2016-04-06T02:52:41.729-0500 D REPL [rsSync] replication batch size is 2
[js_test:multi_coll_drop] 2016-04-06T02:53:50.968-0500 c20013| 2016-04-06T02:52:41.729-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:53:50.969-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.971-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.973-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.974-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.974-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.976-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.977-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.979-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
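
Each { ok: 1.0 } acknowledged above is one replSetUpdatePosition round trip: the secondary ships the durable and applied optimes it knows about so the primary can advance the majority commit point. The same lag information is available from the shell; a sketch using the 3.2-era helper name:

    // Prints each secondary's last applied oplog time relative to the
    // primary, i.e. the optimes the reporter is shipping upstream here.
    rs.printSlaveReplicationInfo();
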
[js_test:multi_coll_drop] 2016-04-06T02:53:50.981-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.983-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.984-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.985-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.986-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.987-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.988-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.989-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.990-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.990-0500 c20013| 2016-04-06T02:52:41.730-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:50.992-0500 c20013| 2016-04-06T02:52:41.731-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:50.999-0500 c20013| 2016-04-06T02:52:41.731-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.013-0500 c20013| 2016-04-06T02:52:41.731-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1278 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.013-0500 c20013| 2016-04-06T02:52:41.731-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1278 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.015-0500 c20013| 2016-04-06T02:52:41.731-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1278 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.023-0500 c20013| 2016-04-06T02:52:41.733-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.027-0500 c20013| 2016-04-06T02:52:41.733-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1280 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.029-0500 c20013| 2016-04-06T02:52:41.733-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1280 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.032-0500 c20013| 2016-04-06T02:52:41.734-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1280 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.039-0500 c20013| 2016-04-06T02:52:41.736-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.049-0500 c20013| 2016-04-06T02:52:41.736-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1282 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.049-0500 c20013| 2016-04-06T02:52:41.736-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1282 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.052-0500 c20013| 2016-04-06T02:52:41.737-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1282 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.060-0500 c20013| 2016-04-06T02:52:41.738-0500 D COMMAND [conn10] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.064-0500 c20013| 2016-04-06T02:52:41.738-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929161000|3, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929152000|2, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.064-0500 c20013| 2016-04-06T02:52:41.738-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999969μs
[js_test:multi_coll_drop] 2016-04-06T02:53:51.067-0500 c20013| 2016-04-06T02:52:41.742-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1275 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|4, t: 3, h: 569718958403941141, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.068-0500 c20013| 2016-04-06T02:52:41.742-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|3, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.071-0500 c20013| 2016-04-06T02:52:41.742-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } }
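
The find on config.shards above parks in waitUntilOpTime because readConcern { level: "majority", afterOpTime: ... } cannot be answered until the requested optime is majority-committed; the very next oplog batch advances _lastCommittedOpTime and the command proceeds on the 'committed' snapshot. A sketch of the same read, with values copied from the log entry (afterOpTime is the internal, config-server flavour of a majority read):

    // Blocks (up to maxTimeMS) until the given optime is in the committed
    // snapshot, then reads config.shards from that snapshot.
    db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929161, 3), t: 3 }
        },
        maxTimeMS: 30000
    });
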
{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.075-0500 c20013| 2016-04-06T02:52:41.742-0500 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:51.078-0500 c20013| 2016-04-06T02:52:41.742-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|4 and ending at ts: Timestamp 1459929161000|4 [js_test:multi_coll_drop] 2016-04-06T02:53:51.081-0500 c20013| 2016-04-06T02:52:41.742-0500 I COMMAND [conn10] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:423 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:51.082-0500 c20013| 2016-04-06T02:52:41.742-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:51.083-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:51.083-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:51.084-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:51.084-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:51.085-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:51.087-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:51.087-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:51.088-0500 c20011| 2016-04-06T02:53:08.185-0500 D COMMAND [conn53] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:51.088-0500 c20012| 2016-04-06T02:53:35.654-0500 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 5000ms [js_test:multi_coll_drop] 2016-04-06T02:53:51.089-0500 c20012| 2016-04-06T02:53:35.654-0500 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected [js_test:multi_coll_drop] 2016-04-06T02:53:51.093-0500 c20012| 2016-04-06T02:53:35.654-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1401 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:40.654-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 6, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 
2016-04-06T02:53:51.096-0500 c20012| 2016-04-06T02:53:35.654-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1402 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:40.654-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 6, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.105-0500 c20012| 2016-04-06T02:53:35.657-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.109-0500 c20012| 2016-04-06T02:53:35.657-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1401 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:51.113-0500 c20012| 2016-04-06T02:53:35.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1401 finished with response: { term: 6, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.114-0500 c20012| 2016-04-06T02:53:35.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1403 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.116-0500 c20012| 2016-04-06T02:53:35.662-0500 I REPL [ReplicationExecutor] dry election run succeeded, running for election [js_test:multi_coll_drop] 2016-04-06T02:53:51.119-0500 c20012| 2016-04-06T02:53:35.663-0500 D QUERY [replExecDBWorker-1] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:51.123-0500 c20012| 2016-04-06T02:53:35.664-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1405 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:40.664-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 7, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.127-0500 c20012| 2016-04-06T02:53:35.664-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1406 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:40.664-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 7, candidateIndex: 1, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.127-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.128-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1405 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:51.129-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1407 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.131-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1405 finished with response: { term: 7, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.132-0500 c20012| 2016-04-06T02:53:35.667-0500 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 7 [js_test:multi_coll_drop] 2016-04-06T02:53:51.135-0500 c20012| 2016-04-06T02:53:35.667-0500 I REPL [ReplicationExecutor] transition to PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:51.135-0500 c20012| 
2016-04-06T02:53:35.667-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:35.667Z [js_test:multi_coll_drop] 2016-04-06T02:53:51.138-0500 c20012| 2016-04-06T02:53:35.667-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:35.667Z [js_test:multi_coll_drop] 2016-04-06T02:53:51.140-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1409 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:45.667-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.141-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1409 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:51.144-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1410 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:41.995-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.144-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.146-0500 c20012| 2016-04-06T02:53:35.667-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1411 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.150-0500 c20012| 2016-04-06T02:53:35.669-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1409 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 7, primaryId: 2, durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, opTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.151-0500 c20012| 2016-04-06T02:53:35.670-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:37.670Z [js_test:multi_coll_drop] 2016-04-06T02:53:51.153-0500 c20012| 2016-04-06T02:53:36.068-0500 D REPL [rsSync] Removing temporary collections from config [js_test:multi_coll_drop] 2016-04-06T02:53:51.157-0500 c20012| 2016-04-06T02:53:36.068-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted [js_test:multi_coll_drop] 2016-04-06T02:53:51.158-0500 c20012| 2016-04-06T02:53:36.141-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.158-0500 c20012| 2016-04-06T02:53:36.141-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:51.160-0500 c20012| 2016-04-06T02:53:36.141-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:51.162-0500 c20011| 2016-04-06T02:53:08.185-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 355 finished with response: { cursor: { nextBatch: [], id: 25053585400, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.164-0500 c20011| 2016-04-06T02:53:08.186-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote 
oplog [js_test:multi_coll_drop] 2016-04-06T02:53:51.170-0500 c20011| 2016-04-06T02:53:08.186-0500 D REPL [rsBackgroundSync-0] Cancelling oplog query because we have to choose a sync source. Current source: :27017, OpTime{ ts: Timestamp 1459929185000|1, t: 4 }, hasSyncSource:0 [js_test:multi_coll_drop] 2016-04-06T02:53:51.177-0500 c20011| 2016-04-06T02:53:08.186-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 359 -- target:mongovm16:20012 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 25053585400 ] } [js_test:multi_coll_drop] 2016-04-06T02:53:51.179-0500 c20011| 2016-04-06T02:53:08.186-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 359 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:51.184-0500 c20011| 2016-04-06T02:53:08.186-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on :27017 [js_test:multi_coll_drop] 2016-04-06T02:53:51.212-0500 c20011| 2016-04-06T02:53:08.186-0500 I COMMAND [conn53] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } numYields:0 reslen:489 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:51.213-0500 c20011| 2016-04-06T02:53:08.187-0500 I REPL [ReplicationExecutor] syncing from: mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.217-0500 c20011| 2016-04-06T02:53:08.187-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 359 finished with response: { cursorsKilled: [ 25053585400 ], cursorsNotFound: [], cursorsAlive: [], cursorsUnknown: [], ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.221-0500 c20011| 2016-04-06T02:53:08.188-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 361 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:38.188-0500 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.224-0500 c20011| 2016-04-06T02:53:08.188-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 361 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.231-0500 c20011| 2016-04-06T02:53:08.188-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 361 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929117000|1, h: 1169182228640141205, v: 2, op: "n", ns: "", o: { msg: "initiating set" } } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.233-0500 c20011| 2016-04-06T02:53:08.193-0500 D REPL [SyncSourceFeedback] setting syncSourceFeedback to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:51.233-0500 c20011| 2016-04-06T02:53:08.193-0500 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on mongovm16:20013 starting at filter: { ts: { $gte: Timestamp 1459929185000|1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.238-0500 c20011| 2016-04-06T02:53:08.193-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } 
[js_test:multi_coll_drop] 2016-04-06T02:53:51.242-0500 c20011| 2016-04-06T02:53:08.193-0500 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 363 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.193-0500 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929185000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.246-0500 c20011| 2016-04-06T02:53:08.193-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 364 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.247-0500 c20011| 2016-04-06T02:53:08.193-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:51.248-0500 c20011| 2016-04-06T02:53:08.193-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:51.249-0500 c20011| 2016-04-06T02:53:08.193-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 366 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:51.250-0500 c20011| 2016-04-06T02:53:08.194-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 365 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:51.251-0500 c20011| 2016-04-06T02:53:08.194-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:51.252-0500 c20011| 2016-04-06T02:53:08.194-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 365 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:53:51.253-0500 c20011| 2016-04-06T02:53:08.194-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:51.254-0500 c20011| 2016-04-06T02:53:08.194-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 366 finished with response: {}
[js_test:multi_coll_drop] 2016-04-06T02:53:51.256-0500 c20011| 2016-04-06T02:53:08.194-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 364 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:51.259-0500 c20011| 2016-04-06T02:53:08.194-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 363 on host mongovm16:20013
[js_test:multi_coll_drop] 2016-04-06T02:53:51.260-0500 c20011| 2016-04-06T02:53:08.195-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 364 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.267-0500 c20011| 2016-04-06T02:53:08.195-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 363 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1459929185000|1, t: 4, h: -8800919752589540802, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1459929185000|2, t: 4, h: -3715515470456908696, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929171765), up: 44, waiting: false } } }, { ts: Timestamp 1459929185000|3, t: 4, h: -2117331217373926554, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929171773), up: 44, waiting: false } } }, { ts: Timestamp 1459929185000|4, t: 4, h: 7420545252714322932, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: -63.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-63.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.274-0500 c20011| 2016-04-06T02:53:08.195-0500 D REPL [rsBackgroundSync-0] fetcher read 4 operations from remote oplog starting at ts: Timestamp 1459929185000|1 and ending at ts: Timestamp 1459929185000|4
[js_test:multi_coll_drop] 2016-04-06T02:53:51.278-0500 c20011| 2016-04-06T02:53:08.196-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:51.280-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.281-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.283-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.287-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.287-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.290-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.290-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.291-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.295-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.295-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.298-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.300-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.301-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.301-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.303-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.305-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.306-0500 c20013| 2016-04-06T02:52:41.742-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.307-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.308-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.309-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.309-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.310-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.310-0500 c20013| 2016-04-06T02:52:41.743-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:51.310-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.312-0500 c20013| 2016-04-06T02:52:41.743-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.314-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.315-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.316-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.318-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.318-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.319-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.320-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.320-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.321-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.322-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.323-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.324-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.324-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.324-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.327-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.328-0500 c20013| 2016-04-06T02:52:41.743-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.329-0500 c20013| 2016-04-06T02:52:41.744-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.331-0500 c20013| 2016-04-06T02:52:41.744-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:51.336-0500 c20013| 2016-04-06T02:52:41.744-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.340-0500 c20013| 2016-04-06T02:52:41.744-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1285 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.340-0500 c20013| 2016-04-06T02:52:41.744-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1285 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.343-0500 c20013| 2016-04-06T02:52:41.744-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1286 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.744-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.347-0500 c20013| 2016-04-06T02:52:41.744-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1286 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.348-0500 c20013| 2016-04-06T02:52:41.744-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1285 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.351-0500 c20013| 2016-04-06T02:52:41.745-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1286 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|5, t: 3, h: 7208870335463155550, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929161743), up: 34, waiting: true } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.355-0500 c20013| 2016-04-06T02:52:41.745-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|5 and ending at ts: Timestamp 1459929161000|5
[js_test:multi_coll_drop] 2016-04-06T02:53:51.356-0500 c20013| 2016-04-06T02:52:41.745-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:51.358-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.360-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.361-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.363-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.366-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.371-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.371-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.372-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.375-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.380-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.390-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.390-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.391-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.395-0500 c20013| 2016-04-06T02:52:41.745-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:51.397-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.401-0500 c20013| 2016-04-06T02:52:41.745-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "mongovm16:20014" }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.402-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.402-0500 c20013| 2016-04-06T02:52:41.745-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.409-0500 c20013| 2016-04-06T02:52:41.746-0500 D COMMAND [conn16] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.411-0500 c20013| 2016-04-06T02:52:41.746-0500 D COMMAND [conn16] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } } }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.414-0500 c20013| 2016-04-06T02:52:41.746-0500 D COMMAND [conn16] Using 'committed' snapshot. { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.416-0500 c20013| 2016-04-06T02:52:41.746-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.427-0500 c20013| 2016-04-06T02:52:41.746-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.430-0500 c20013| 2016-04-06T02:52:41.746-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.442-0500 c20013| 2016-04-06T02:52:41.746-0500 D QUERY [conn16] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1
[js_test:multi_coll_drop] 2016-04-06T02:53:51.442-0500 c20013| 2016-04-06T02:52:41.746-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.444-0500 c20013| 2016-04-06T02:52:41.746-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.444-0500 c20013| 2016-04-06T02:52:41.746-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.455-0500 c20013| 2016-04-06T02:52:41.747-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.466-0500 c20013| 2016-04-06T02:52:41.747-0500 I COMMAND [conn16] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:408 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 148 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms
[js_test:multi_coll_drop] 2016-04-06T02:53:51.470-0500 c20013| 2016-04-06T02:52:41.747-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1289 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.472-0500 c20013| 2016-04-06T02:52:41.747-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.473-0500 c20013| 2016-04-06T02:52:41.747-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1289 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.474-0500 c20013| 2016-04-06T02:52:41.747-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.477-0500 c20013| 2016-04-06T02:52:41.747-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.478-0500 c20013| 2016-04-06T02:52:41.747-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1289 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.483-0500 c20013| 2016-04-06T02:52:41.747-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1290 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.747-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|3, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.485-0500 c20013| 2016-04-06T02:52:41.748-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.486-0500 c20013| 2016-04-06T02:52:41.748-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.489-0500 c20013| 2016-04-06T02:52:41.748-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.489-0500 c20013| 2016-04-06T02:52:41.748-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.490-0500 c20013| 2016-04-06T02:52:41.748-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.492-0500 c20013| 2016-04-06T02:52:41.748-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.495-0500 c20013| 2016-04-06T02:52:41.748-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.497-0500 c20013| 2016-04-06T02:52:41.748-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:51.498-0500 c20013| 2016-04-06T02:52:41.748-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1290 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.503-0500 c20013| 2016-04-06T02:52:41.749-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.509-0500 c20013| 2016-04-06T02:52:41.749-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1292 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.510-0500 c20013| 2016-04-06T02:52:41.749-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1292 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.512-0500 c20013| 2016-04-06T02:52:41.749-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1292 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.520-0500 c20013| 2016-04-06T02:52:41.750-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.527-0500 c20013| 2016-04-06T02:52:41.750-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1294 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.528-0500 c20013| 2016-04-06T02:52:41.750-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1294 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.530-0500 c20013| 2016-04-06T02:52:41.750-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1294 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.533-0500 c20013| 2016-04-06T02:52:41.762-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1290 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|6, t: 3, h: 9145859565647178306, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929161747), up: 34, waiting: true } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.537-0500 c20013| 2016-04-06T02:52:41.765-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|5, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.542-0500 c20013| 2016-04-06T02:52:41.765-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|6 and ending at ts: Timestamp 1459929161000|6
[js_test:multi_coll_drop] 2016-04-06T02:53:51.543-0500 c20013| 2016-04-06T02:52:41.765-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:51.545-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.545-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.546-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.548-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.554-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.556-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.556-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.557-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.558-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.559-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.560-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.563-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.563-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.564-0500 c20013| 2016-04-06T02:52:41.766-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:51.567-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.567-0500 c20013| 2016-04-06T02:52:41.766-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20015" }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.568-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.569-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.571-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.575-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.578-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.579-0500 c20013| 2016-04-06T02:52:41.766-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.580-0500 c20013| 2016-04-06T02:52:41.767-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.581-0500 c20013| 2016-04-06T02:52:41.767-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.582-0500 c20013| 2016-04-06T02:52:41.767-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.583-0500 c20013| 2016-04-06T02:52:41.767-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.584-0500 c20013| 2016-04-06T02:52:41.767-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.586-0500 c20013| 2016-04-06T02:52:41.767-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.589-0500 c20013| 2016-04-06T02:52:41.767-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.591-0500 c20013| 2016-04-06T02:52:41.767-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.591-0500 c20013| 2016-04-06T02:52:41.768-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.593-0500 c20013| 2016-04-06T02:52:41.768-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.593-0500 c20013| 2016-04-06T02:52:41.769-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.594-0500 c20013| 2016-04-06T02:52:41.769-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.606-0500 c20013| 2016-04-06T02:52:41.769-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:51.610-0500 c20013| 2016-04-06T02:52:41.769-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.616-0500 c20013| 2016-04-06T02:52:41.769-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1297 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.618-0500 c20013| 2016-04-06T02:52:41.769-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1297 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.620-0500 c20013| 2016-04-06T02:52:41.769-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1297 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.622-0500 c20013| 2016-04-06T02:52:41.770-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1299 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.770-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|5, t: 3 } }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.625-0500 c20013| 2016-04-06T02:52:41.770-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1299 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.631-0500 c20013| 2016-04-06T02:52:41.773-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.636-0500 c20013| 2016-04-06T02:52:41.773-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1300 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.637-0500 c20013| 2016-04-06T02:52:41.773-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1300 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.638-0500 c20013| 2016-04-06T02:52:41.773-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1300 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.642-0500 c20013| 2016-04-06T02:52:41.773-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1299 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|7, t: 3, h: 5502916262959992045, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04965c17830b843f1b1'), state: 2, when: new Date(1459929161772), why: "splitting chunk [{ _id: -75.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.644-0500 c20013| 2016-04-06T02:52:41.774-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|6, t: 3 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.646-0500 c20013| 2016-04-06T02:52:41.774-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|7 and ending at ts: Timestamp 1459929161000|7
[js_test:multi_coll_drop] 2016-04-06T02:53:51.647-0500 c20013| 2016-04-06T02:52:41.774-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes
[js_test:multi_coll_drop] 2016-04-06T02:53:51.648-0500 c20013| 2016-04-06T02:52:41.774-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:51.650-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.651-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.652-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.653-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.654-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.655-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.656-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.656-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.658-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.662-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.663-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.664-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.667-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.669-0500 c20013| 2016-04-06T02:52:41.775-0500 D REPL [rsSync] replication batch size is 1
[js_test:multi_coll_drop] 2016-04-06T02:53:51.670-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.671-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.672-0500 c20013| 2016-04-06T02:52:41.775-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.673-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.674-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.676-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.678-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.679-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.679-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.680-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.683-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.684-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.685-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.687-0500 c20013| 2016-04-06T02:52:41.776-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.688-0500 c20013| 2016-04-06T02:52:41.776-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.688-0500 c20013| 2016-04-06T02:52:41.775-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.690-0500 c20013| 2016-04-06T02:52:41.776-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.693-0500 c20013| 2016-04-06T02:52:41.776-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.694-0500 c20013| 2016-04-06T02:52:41.776-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.696-0500 c20013| 2016-04-06T02:52:41.776-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
[js_test:multi_coll_drop] 2016-04-06T02:53:51.700-0500 c20013| 2016-04-06T02:52:41.776-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
[js_test:multi_coll_drop] 2016-04-06T02:53:51.708-0500 c20013| 2016-04-06T02:52:41.776-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.713-0500 c20013| 2016-04-06T02:52:41.776-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1303 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.715-0500 c20013| 2016-04-06T02:52:41.776-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1303 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.715-0500 c20013| 2016-04-06T02:52:41.776-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1303 finished with response: { ok: 1.0 }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.721-0500 c20013| 2016-04-06T02:52:41.777-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.729-0500 c20013| 2016-04-06T02:52:41.777-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1305 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, memberId: 2, cfgver: 1 } ] }
[js_test:multi_coll_drop] 2016-04-06T02:53:51.730-0500 c20013| 2016-04-06T02:52:41.777-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1305 on host mongovm16:20011
[js_test:multi_coll_drop] 2016-04-06T02:53:51.734-0500 c20013| 2016-04-06T02:52:41.777-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1305 finished with response:
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.736-0500 c20013| 2016-04-06T02:52:41.778-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1307 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.778-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|6, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.738-0500 c20013| 2016-04-06T02:52:41.778-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1307 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:51.739-0500 c20013| 2016-04-06T02:52:41.782-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1307 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.745-0500 c20013| 2016-04-06T02:52:41.782-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.748-0500 c20013| 2016-04-06T02:52:41.782-0500 D REPL [conn15] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929161000|7, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929161000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.751-0500 c20013| 2016-04-06T02:52:41.782-0500 D REPL [conn15] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999977μs [js_test:multi_coll_drop] 2016-04-06T02:53:51.753-0500 c20013| 2016-04-06T02:52:41.782-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.755-0500 c20013| 2016-04-06T02:52:41.782-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|7, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.757-0500 c20013| 2016-04-06T02:52:41.782-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.757-0500 c20013| 2016-04-06T02:52:41.782-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:51.762-0500 c20013| 2016-04-06T02:52:41.782-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:51.768-0500 c20013| 2016-04-06T02:52:41.782-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1309 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.782-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:51.775-0500 c20013| 2016-04-06T02:52:41.782-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1309 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:51.788-0500 c20013| 2016-04-06T02:52:41.783-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:51.795-0500 c20013| 2016-04-06T02:52:41.788-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1309 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|8, t: 3, h: 6949985940899244306, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-75.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-74.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:51.797-0500 c20013| 2016-04-06T02:52:41.789-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|8 and ending at ts: Timestamp 1459929161000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:51.799-0500 c20013| 2016-04-06T02:52:41.789-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:51.801-0500 c20013| 2016-04-06T02:52:41.789-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:51.803-0500 c20013| 2016-04-06T02:52:41.789-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.127-0500 c20013| 2016-04-06T02:52:41.789-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.129-0500 c20013| 2016-04-06T02:52:41.789-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.131-0500 c20013| 2016-04-06T02:52:41.789-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.131-0500 c20013| 2016-04-06T02:52:41.789-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.136-0500 c20013| 2016-04-06T02:52:41.789-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.152-0500 c20013| 2016-04-06T02:52:41.789-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.154-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.155-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.158-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.165-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.165-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.167-0500 c20013| 2016-04-06T02:52:41.790-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:52.168-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.170-0500 c20013| 2016-04-06T02:52:41.790-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-75.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:52.172-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.174-0500 c20013| 2016-04-06T02:52:41.790-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll-_id_-74.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:52.177-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.180-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
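
The applyOps entry fetched from the oplog above packages the chunk split as one atomic write to config.chunks: two upserts, one per resulting chunk, each with a bumped lastmod under the same lastmodEpoch, which the workers then apply via _id lookups. A sketch of that command shape in shell syntax, with values copied from the batch (illustrative only; this write is generated and replicated internally, not issued by clients):

    db.adminCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-75.0",
                   lastmod: Timestamp(1000, 53),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll",
                   min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-75.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-74.0",
                   lastmod: Timestamp(1000, 54),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll",
                   min: { _id: -74.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-74.0" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
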
2016-04-06T02:53:52.188-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.189-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.191-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.192-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.194-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.195-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.196-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.202-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.203-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.203-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.204-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.205-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.207-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.207-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.208-0500 c20013| 2016-04-06T02:52:41.790-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.210-0500 c20013| 2016-04-06T02:52:41.790-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.216-0500 c20013| 2016-04-06T02:52:41.791-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.222-0500 c20013| 2016-04-06T02:52:41.791-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1311 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.224-0500 c20013| 2016-04-06T02:52:41.791-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1311 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.226-0500 c20013| 2016-04-06T02:52:41.791-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1311 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.228-0500 c20013| 2016-04-06T02:52:41.791-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1313 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.791-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.229-0500 c20013| 2016-04-06T02:52:41.791-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1313 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.235-0500 c20013| 2016-04-06T02:52:41.796-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.245-0500 c20013| 2016-04-06T02:52:41.796-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1314 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 
1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.251-0500 c20013| 2016-04-06T02:52:41.796-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1314 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.252-0500 c20013| 2016-04-06T02:52:41.796-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1314 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.253-0500 c20013| 2016-04-06T02:52:41.797-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1313 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.257-0500 c20013| 2016-04-06T02:52:41.797-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.263-0500 c20013| 2016-04-06T02:52:41.798-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:52.267-0500 c20013| 2016-04-06T02:52:41.798-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1317 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.798-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.270-0500 c20013| 2016-04-06T02:52:41.798-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1317 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.279-0500 c20013| 2016-04-06T02:52:41.798-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1317 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|9, t: 3, h: -4617580344049194992, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:41.797-0500-5704c04965c17830b843f1b2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161797), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -75.0 }, max: { _id: MaxKey } }, left: { min: { _id: -75.0 }, max: { _id: -74.0 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -74.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.284-0500 c20013| 2016-04-06T02:52:41.799-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|9 and ending at ts: Timestamp 1459929161000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:52.286-0500 c20013| 2016-04-06T02:52:41.799-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.287-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.289-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.290-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.293-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.295-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.298-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.300-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.302-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.304-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.307-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.309-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.310-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.311-0500 c20013| 2016-04-06T02:52:41.800-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:52.312-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.314-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.314-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.316-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.321-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.322-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.323-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
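
Alongside the chunk updates, the split leaves an audit trail: the nextBatch fetched above carries an insert into config.changelog with what: "split" and the before/left/right ranges under details. A hedged diagnostic sketch for reading those records back out of a config server:

    // Most recent split records for the collection under test.
    db.getSiblingDB("config").changelog
      .find({ what: "split", ns: "multidrop.coll" })
      .sort({ time: -1 })
      .limit(5)
      .forEach(printjson);
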
2016-04-06T02:53:52.326-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.326-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.327-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.329-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.331-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.331-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.334-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.336-0500 c20013| 2016-04-06T02:52:41.800-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.337-0500 c20013| 2016-04-06T02:52:41.801-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.338-0500 c20013| 2016-04-06T02:52:41.801-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.346-0500 c20013| 2016-04-06T02:52:41.801-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.347-0500 c20013| 2016-04-06T02:52:41.801-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.348-0500 c20013| 2016-04-06T02:52:41.801-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.350-0500 c20013| 2016-04-06T02:52:41.801-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.371-0500 c20013| 2016-04-06T02:52:41.801-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.375-0500 c20013| 2016-04-06T02:52:41.801-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1319 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.388-0500 c20013| 2016-04-06T02:52:41.801-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1319 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.403-0500 c20013| 2016-04-06T02:52:41.801-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1319 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.413-0500 c20013| 2016-04-06T02:52:41.802-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1321 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.802-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.414-0500 c20013| 2016-04-06T02:52:41.802-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1321 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.418-0500 c20013| 2016-04-06T02:52:41.822-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.429-0500 c20013| 2016-04-06T02:52:41.822-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1322 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 
1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.431-0500 c20013| 2016-04-06T02:52:41.822-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1322 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.431-0500 c20013| 2016-04-06T02:52:41.822-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1322 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.434-0500 c20013| 2016-04-06T02:52:41.822-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1321 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.435-0500 c20013| 2016-04-06T02:52:41.823-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.440-0500 c20013| 2016-04-06T02:52:41.823-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:52.443-0500 c20013| 2016-04-06T02:52:41.824-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1325 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.824-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|9, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.453-0500 c20013| 2016-04-06T02:52:41.824-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1325 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.458-0500 c20013| 2016-04-06T02:52:41.824-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1325 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|10, t: 3, h: -6490455652975516690, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.460-0500 c20013| 2016-04-06T02:52:41.825-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|10 and ending at ts: Timestamp 1459929161000|10 [js_test:multi_coll_drop] 2016-04-06T02:53:52.464-0500 c20013| 2016-04-06T02:52:41.825-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.466-0500 c20013| 2016-04-06T02:52:41.825-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.467-0500 c20013| 2016-04-06T02:52:41.825-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.468-0500 c20013| 2016-04-06T02:52:41.825-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.470-0500 c20013| 2016-04-06T02:52:41.825-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.470-0500 c20013| 2016-04-06T02:52:41.825-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.471-0500 c20013| 2016-04-06T02:52:41.825-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.472-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.473-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.474-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.474-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.476-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.476-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.478-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.479-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.480-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.480-0500 c20013| 2016-04-06T02:52:41.826-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:52.483-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.484-0500 c20013| 2016-04-06T02:52:41.826-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:52.485-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.487-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
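
The config.locks updates replicated in these batches drive the collection's distributed lock: $set: { state: 0 } releases it, and the subsequent $set: { state: 2, when: ..., why: "splitting chunk ..." } re-acquires it for the next split, with ts holding the new holder's ObjectId. A sketch for inspecting the lock document directly (hypothetical diagnostic; names from the log):

    var lock = db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" });
    // state 0 = unlocked, state 2 = held; "why" records the holder's reason.
    printjson(lock);
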
2016-04-06T02:53:52.488-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.489-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.491-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.495-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.496-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.497-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.500-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.500-0500 c20013| 2016-04-06T02:52:41.827-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.504-0500 c20013| 2016-04-06T02:52:41.827-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.504-0500 c20013| 2016-04-06T02:52:41.827-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.505-0500 c20013| 2016-04-06T02:52:41.827-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.507-0500 c20013| 2016-04-06T02:52:41.826-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.508-0500 c20013| 2016-04-06T02:52:41.827-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.513-0500 c20013| 2016-04-06T02:52:41.827-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.514-0500 c20013| 2016-04-06T02:52:41.827-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.519-0500 c20013| 2016-04-06T02:52:41.827-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1327 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.827-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|9, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.519-0500 c20013| 2016-04-06T02:52:41.827-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1327 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.527-0500 c20013| 2016-04-06T02:52:41.831-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.530-0500 c20013| 2016-04-06T02:52:41.831-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1328 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.532-0500 c20013| 2016-04-06T02:52:41.831-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1328 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.535-0500 c20013| 2016-04-06T02:52:41.832-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1328 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.539-0500 c20013| 2016-04-06T02:52:41.838-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.543-0500 c20013| 2016-04-06T02:52:41.838-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1330 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 
1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.546-0500 c20013| 2016-04-06T02:52:41.838-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1330 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.552-0500 c20013| 2016-04-06T02:52:41.838-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1330 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.558-0500 c20013| 2016-04-06T02:52:41.839-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1327 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.559-0500 c20013| 2016-04-06T02:52:41.840-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.567-0500 c20013| 2016-04-06T02:52:41.840-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:52.571-0500 c20013| 2016-04-06T02:52:41.840-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1333 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.840-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.573-0500 c20013| 2016-04-06T02:52:41.840-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1333 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.578-0500 c20013| 2016-04-06T02:52:41.841-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|52 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.581-0500 c20013| 2016-04-06T02:52:41.841-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.585-0500 c20013| 2016-04-06T02:52:41.841-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|52 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.588-0500 c20013| 2016-04-06T02:52:41.841-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:52.592-0500 c20013| 2016-04-06T02:52:41.841-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|52 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:52.595-0500 c20013| 2016-04-06T02:52:41.842-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.597-0500 c20013| 2016-04-06T02:52:41.842-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.599-0500 c20013| 2016-04-06T02:52:41.842-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.601-0500 c20013| 2016-04-06T02:52:41.842-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:52.602-0500 c20013| 2016-04-06T02:52:41.842-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|10, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:52.607-0500 c20013| 2016-04-06T02:52:41.843-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1333 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|11, t: 3, h: 5945394017863447987, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04965c17830b843f1b3'), state: 2, when: new Date(1459929161842), why: "splitting chunk [{ _id: -74.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.609-0500 c20013| 2016-04-06T02:52:41.843-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|11 and ending at ts: Timestamp 1459929161000|11 [js_test:multi_coll_drop] 2016-04-06T02:53:52.611-0500 c20013| 2016-04-06T02:52:41.845-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1335 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.845-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.613-0500 c20013| 2016-04-06T02:52:41.845-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1335 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.615-0500 c20013| 2016-04-06T02:52:41.851-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.615-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.618-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.619-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.620-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.621-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.623-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.626-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.631-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.633-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.633-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.642-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.644-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.645-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.648-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.649-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.651-0500 c20013| 2016-04-06T02:52:41.851-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:52.652-0500 c20013| 2016-04-06T02:52:41.851-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.657-0500 c20013| 2016-04-06T02:52:41.851-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:52.660-0500 c20013| 2016-04-06T02:52:41.856-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.695-0500 c20013| 2016-04-06T02:52:41.856-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
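
The score(...) lines a little above spell out the plan ranker's arithmetic for the two config.chunks finds: score = baseScore(1) + productivity(advanced/works) + 0.0003 of tie-breaker bonuses. Reproducing both logged values (names follow the log; a sketch of the arithmetic, not server code):

    function planScore(advanced, works) {
        var baseScore = 1;
        var productivity = advanced / works;
        var tieBreakers = 0.0001 + 0.0001 + 0.0001; // noFetch + noSort + noIxisect
        return baseScore + productivity + tieBreakers;
    }
    print(planScore(2, 3)); // 1.66696..., logged as score(1.66697)
    print(planScore(1, 1)); // 2.0003, logged as score(2.0003)
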
2016-04-06T02:53:52.696-0500 c20013| 2016-04-06T02:52:41.865-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.698-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.699-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.700-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.701-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.703-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.704-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.704-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.705-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.706-0500 c20013| 2016-04-06T02:52:41.870-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.706-0500 c20013| 2016-04-06T02:52:41.871-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.709-0500 c20013| 2016-04-06T02:52:41.871-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.710-0500 c20013| 2016-04-06T02:52:41.871-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.711-0500 c20013| 2016-04-06T02:52:41.871-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.712-0500 c20013| 2016-04-06T02:52:41.874-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.714-0500 c20013| 2016-04-06T02:52:41.875-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.720-0500 c20013| 2016-04-06T02:52:41.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1336 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.722-0500 c20013| 2016-04-06T02:52:41.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1336 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.723-0500 c20013| 2016-04-06T02:52:41.875-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1336 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.728-0500 c20013| 2016-04-06T02:52:41.878-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.733-0500 c20013| 2016-04-06T02:52:41.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1338 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.734-0500 c20013| 2016-04-06T02:52:41.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1338 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.735-0500 c20013| 2016-04-06T02:52:41.878-0500 D ASIO
[NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1338 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.735-0500 [thread2] reconnect mongovm16:20013 (192.168.100.28) failed failed [js_test:multi_coll_drop] 2016-04-06T02:53:52.746-0500 c20013| 2016-04-06T02:52:41.879-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1335 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.749-0500 c20013| 2016-04-06T02:52:41.879-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.750-0500 c20013| 2016-04-06T02:52:41.879-0500 D REPL [conn15] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929161000|11, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929161000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.750-0500 c20013| 2016-04-06T02:52:41.879-0500 D REPL [conn15] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999971μs [js_test:multi_coll_drop] 2016-04-06T02:53:52.752-0500 c20013| 2016-04-06T02:52:41.879-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.753-0500 c20013| 2016-04-06T02:52:41.879-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:52.765-0500 c20013| 2016-04-06T02:52:41.879-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|11, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.776-0500 c20013| 2016-04-06T02:52:41.879-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.782-0500 c20013| 2016-04-06T02:52:41.879-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:52.788-0500 c20013| 2016-04-06T02:52:41.879-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1341 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.879-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|11, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.792-0500 c20013| 2016-04-06T02:52:41.880-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:52.796-0500 c20013| 2016-04-06T02:52:41.880-0500 D COMMAND [conn15] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|54 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|11, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.801-0500 c20013| 2016-04-06T02:52:41.880-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|11, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.803-0500 c20013| 2016-04-06T02:52:41.880-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|54 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|11, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.821-0500 c20013| 2016-04-06T02:52:41.880-0500 D QUERY [conn15] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:52.832-0500 c20013| 2016-04-06T02:52:41.880-0500 I COMMAND [conn15] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|54 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|11, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:52.836-0500 c20013| 2016-04-06T02:52:41.880-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1341 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.853-0500 c20013| 2016-04-06T02:52:41.881-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1341 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|12, t: 3, h: 4287115959176304978, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: -73.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-74.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-73.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.855-0500 c20013| 2016-04-06T02:52:41.881-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|12 and ending at ts: Timestamp 1459929161000|12 [js_test:multi_coll_drop] 2016-04-06T02:53:52.856-0500 c20013| 2016-04-06T02:52:41.881-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.856-0500 c20013| 2016-04-06T02:52:41.881-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.859-0500 c20013| 2016-04-06T02:52:41.881-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.860-0500 c20013| 2016-04-06T02:52:41.881-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.865-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.866-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.867-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.868-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.868-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.869-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.870-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.873-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.874-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.874-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.875-0500 c20013| 2016-04-06T02:52:41.882-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:52.876-0500 c20013| 2016-04-06T02:52:41.882-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-74.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:52.877-0500 c20013| 2016-04-06T02:52:41.882-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll-_id_-73.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:52.880-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.882-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.882-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.883-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
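conn15's reads above use readConcern { level: "majority", afterOpTime: ... }: the server parks the command (waitUntilOpTime) until a 'committed' snapshot at or beyond the requested optime exists, then serves the find from that snapshot. A sketch of the same request issued from the shell; the filter, sort, and optime are copied from the logged command, and afterOpTime is an internal readConcern field used by the sharding catalog client rather than a public option:

    // Re-issue conn15's config.chunks read against a config server.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1000, 54) } },
        sort: { lastmod: 1 },
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929161, 11), t: NumberLong(3) } },
        maxTimeMS: 30000
    })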
2016-04-06T02:53:52.884-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.886-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.886-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.887-0500 c20013| 2016-04-06T02:52:41.883-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.888-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.889-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.892-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.899-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.901-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.904-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.911-0500 c20013| 2016-04-06T02:52:41.882-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.923-0500 c20013| 2016-04-06T02:52:41.883-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1343 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.883-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|11, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.924-0500 c20013| 2016-04-06T02:52:41.883-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1343 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.928-0500 c20013| 2016-04-06T02:52:41.885-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.930-0500 c20013| 2016-04-06T02:52:41.885-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.932-0500 c20013| 2016-04-06T02:52:41.886-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.932-0500 c20013| 2016-04-06T02:52:41.886-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:52.934-0500 c20013| 2016-04-06T02:52:41.886-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:52.939-0500 c20013| 2016-04-06T02:52:41.886-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.951-0500 c20013| 2016-04-06T02:52:41.886-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1344 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.952-0500 c20013| 2016-04-06T02:52:41.886-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1344 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.955-0500 c20013| 2016-04-06T02:52:41.887-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1344 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.961-0500 c20013| 2016-04-06T02:52:41.893-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.968-0500 c20013| 2016-04-06T02:52:41.893-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1346 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:52.974-0500 c20013| 2016-04-06T02:52:41.893-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1346 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.979-0500 c20013| 2016-04-06T02:52:41.893-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1346 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.981-0500 c20013| 2016-04-06T02:52:41.893-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1343 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.981-0500 c20013| 2016-04-06T02:52:41.894-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:52.984-0500 c20013| 2016-04-06T02:52:41.894-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:52.989-0500 c20013| 2016-04-06T02:52:41.894-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1349 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.894-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|12, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:52.991-0500 c20013| 2016-04-06T02:52:41.894-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1349 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:52.999-0500 c20013| 2016-04-06T02:52:41.895-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1349 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|13, t: 3, h: 8150617756176431639, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:41.893-0500-5704c04965c17830b843f1b4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929161893), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -74.0 }, max: { _id: MaxKey } }, left: { min: { _id: -74.0 }, max: { _id: -73.0 }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -73.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.013-0500 c20013| 2016-04-06T02:52:41.895-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|13 and ending at ts: Timestamp 1459929161000|13 [js_test:multi_coll_drop] 2016-04-06T02:53:53.013-0500 c20013| 2016-04-06T02:52:41.895-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.013-0500 c20013| 2016-04-06T02:52:41.895-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.013-0500 c20013| 2016-04-06T02:52:41.895-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.014-0500 c20013| 2016-04-06T02:52:41.895-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.014-0500 c20013| 2016-04-06T02:52:41.895-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.014-0500 c20013| 2016-04-06T02:52:41.895-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.014-0500 c20013| 2016-04-06T02:52:41.895-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.014-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.015-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.016-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.016-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.017-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.017-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.019-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.019-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.019-0500 c20013| 2016-04-06T02:52:41.896-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.023-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.024-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.032-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.034-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.035-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
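The batches applied above are the footprint of a chunk split: a single applyOps oplog entry atomically rewrites the two affected config.chunks documents (splitting -74..MaxKey into -74..-73 and -73..MaxKey, bumping lastmod to 1000|55 and 1000|56), and a config.changelog "split" document records the event. A sketch for inspecting that state on a config server; only the namespace is taken from the log, the collections are the standard config metadata:

    // Chunk bounds after the split, oldest lastmod first.
    var cfg = db.getSiblingDB("config");
    cfg.chunks.find({ ns: "multidrop.coll" }).sort({ lastmod: 1 });
    // The most recent "split" event recorded for the collection.
    cfg.changelog.find({ what: "split", ns: "multidrop.coll" }).sort({ time: -1 }).limit(1);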
2016-04-06T02:53:53.038-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.040-0500 c20013| 2016-04-06T02:52:41.896-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.042-0500 c20013| 2016-04-06T02:52:41.897-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.049-0500 c20013| 2016-04-06T02:52:41.897-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.050-0500 c20013| 2016-04-06T02:52:41.897-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.051-0500 c20013| 2016-04-06T02:52:41.897-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.053-0500 c20013| 2016-04-06T02:52:41.897-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.053-0500 c20013| 2016-04-06T02:52:41.897-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.055-0500 c20013| 2016-04-06T02:52:41.897-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.057-0500 c20013| 2016-04-06T02:52:41.897-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.064-0500 c20013| 2016-04-06T02:52:41.897-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1351 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.897-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|12, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.066-0500 c20013| 2016-04-06T02:52:41.897-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1351 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:53.067-0500 c20013| 2016-04-06T02:52:41.903-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.070-0500 c20013| 2016-04-06T02:52:41.903-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.070-0500 c20013| 2016-04-06T02:52:41.903-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.072-0500 c20013| 2016-04-06T02:52:41.903-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.102-0500 c20013| 2016-04-06T02:52:41.906-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.114-0500 c20013| 2016-04-06T02:52:41.906-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1352 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.118-0500 c20013| 2016-04-06T02:52:41.906-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1352 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:53.121-0500 c20013| 2016-04-06T02:52:41.906-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1352 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.122-0500 c20011| 2016-04-06T02:53:08.196-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.124-0500 c20011| 2016-04-06T02:53:08.196-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.125-0500 c20011| 2016-04-06T02:53:08.196-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:53.130-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.130-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.133-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.134-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.134-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.138-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.138-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool 
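After each applied batch the SyncSourceFeedback reporter pushes this member's position to its sync source, which is what the repeated "Reporter sending slave oplog progress" entries show. The payload shape, annotated; the values are copied from the c20013 report above (the log renders the Timestamp seconds field in milliseconds, so 1459929161000|12 corresponds to Timestamp(1459929161, 12)), and the memberId-to-node mapping is inferred from the ports:

    // replSetUpdatePosition payload (one element per known member; this is
    // the reporting member's own entry).
    ({
        replSetUpdatePosition: 1,
        optimes: [
            { durableOpTime: { ts: Timestamp(1459929161, 12), t: 3 },  // newest op journaled
              appliedOpTime: { ts: Timestamp(1459929161, 13), t: 3 },  // newest op applied, may lead durable
              memberId: 2,      // the reporting member (c20013)
              cfgver: 1 } ]     // replica set config version the positions refer to
    })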
[js_test:multi_coll_drop] 2016-04-06T02:53:53.140-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.142-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.144-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.147-0500 ReplSetTest Could not call ismaster on node connection to mongovm16:20013: Error: error doing query: failed: socket exception [CONNECT_ERROR] for network error while attempting to run command 'isMaster' on host 'mongovm16:20013' [js_test:multi_coll_drop] 2016-04-06T02:53:53.151-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.155-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.158-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.163-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.167-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.168-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.174-0500 c20011| 2016-04-06T02:53:08.197-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.176-0500 c20011| 2016-04-06T02:53:08.197-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.177-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.184-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.189-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.189-0500 c20011| 2016-04-06T02:53:08.197-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.192-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.195-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.198-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.199-0500 c20011| 2016-04-06T02:53:08.198-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 369 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.198-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|1, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.204-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.207-0500 c20011| 2016-04-06T02:53:08.198-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 369 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.207-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.209-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.210-0500 c20011| 2016-04-06T02:53:08.198-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.212-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.217-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.218-0500 c20011| 2016-04-06T02:53:08.198-0500 D QUERY [repl writer worker 3] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:53.220-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.221-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.229-0500 c20011| 2016-04-06T02:53:08.198-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream 
updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.229-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.234-0500 c20011| 2016-04-06T02:53:08.198-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 370 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.236-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.238-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.239-0500 c20011| 2016-04-06T02:53:08.198-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 370 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.242-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.243-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.247-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.248-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.255-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.261-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.263-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.265-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.266-0500 c20011| 2016-04-06T02:53:08.198-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 370 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 
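Between batches c20011 keeps an awaitData cursor tailing the primary's oplog; RemoteCommand 369 above is one round of that loop. maxTimeMS: 2500 bounds how long the upstream node may hold the getMore open waiting for new entries, while term and lastKnownCommittedOpTime let it reject fetchers from a stale term and piggyback commit-point advances even on empty batches. A sketch of the same call from the shell; the cursor id and optime are copied from the log, the cursor must already exist, and term/lastKnownCommittedOpTime are internal replication fields:

    // One round of the oplog tailing loop.
    db.getSiblingDB("local").runCommand({
        getMore: NumberLong("23953707769"),
        collection: "oplog.rs",
        maxTimeMS: 2500,
        term: NumberLong(4),
        lastKnownCommittedOpTime: { ts: Timestamp(1459929185, 1), t: NumberLong(4) }
    })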
2016-04-06T02:53:53.268-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.271-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.271-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.272-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.276-0500 c20011| 2016-04-06T02:53:08.198-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.278-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.284-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.294-0500 c20011| 2016-04-06T02:53:08.199-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.299-0500 c20011| 2016-04-06T02:53:08.199-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.305-0500 c20011| 2016-04-06T02:53:08.199-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.312-0500 c20011| 2016-04-06T02:53:08.199-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 372 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.312-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.317-0500 c20011| 2016-04-06T02:53:08.199-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 372 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.319-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 3] starting thread 
in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.319-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.321-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.321-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.326-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.327-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.328-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.332-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.333-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.335-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.337-0500 c20011| 2016-04-06T02:53:08.199-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.340-0500 c20011| 2016-04-06T02:53:08.200-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.345-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.345-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.347-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.348-0500 c20011| 2016-04-06T02:53:08.200-0500 D QUERY [repl writer worker 11] Using idhack: { _id: "multidrop.coll-_id_-64.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:53.349-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.351-0500 c20011| 2016-04-06T02:53:08.200-0500 D QUERY [repl writer worker 11] Using idhack: { _id: "multidrop.coll-_id_-63.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:53.357-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.359-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.361-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.362-0500 
c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.364-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.365-0500 c20011| 2016-04-06T02:53:08.200-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 372 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.372-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.373-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.379-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.380-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.383-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.390-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.398-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.399-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.400-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.404-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.405-0500 c20011| 2016-04-06T02:53:08.200-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.419-0500 c20011| 2016-04-06T02:53:08.200-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.438-0500 c20011| 2016-04-06T02:53:08.201-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.442-0500 c20011| 2016-04-06T02:53:08.201-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 374 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.443-0500 c20011| 2016-04-06T02:53:08.201-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 374 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.446-0500 c20011| 2016-04-06T02:53:08.201-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 374 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.451-0500 c20011| 2016-04-06T02:53:08.203-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.461-0500 c20011| 2016-04-06T02:53:08.203-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 376 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.470-0500 c20011| 2016-04-06T02:53:08.203-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 376 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.471-0500 c20011| 2016-04-06T02:53:08.203-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 376 finished with response: { ok: 1.0 } 
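The "Updating _lastCommittedOpTime" entries that follow show the commit point advancing: the newest optime that a majority of members have durably written, computed from exactly the durableOpTime values the reporters exchange above. A rough sketch of the selection rule in plain shell JavaScript; this is an illustration of the idea, not the server's implementation, and it ignores ties, non-voting members, and term edge cases:

    // Newest optime durably replicated to a majority of members.
    // optimes: [{ t: <term>, secs: <ts seconds>, inc: <ts increment> }, ...]
    function majorityCommitPoint(optimes) {
        var newestFirst = function (a, b) {
            return (b.t - a.t) || (b.secs - a.secs) || (b.inc - a.inc);
        };
        var sorted = optimes.slice().sort(newestFirst);
        var majority = Math.floor(sorted.length / 2) + 1;  // 2 of 3 here
        return sorted[majority - 1];  // newest optime at least a majority reached
    }

With three members this picks the middle durable optime, which is why the commit point keeps advancing even while the lagging memberId 2 (still at term 3) trails behind.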
[js_test:multi_coll_drop] 2016-04-06T02:53:53.472-0500 c20011| 2016-04-06T02:53:08.205-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 369 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.475-0500 c20011| 2016-04-06T02:53:08.206-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929185000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.476-0500 c20011| 2016-04-06T02:53:08.206-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:53.481-0500 c20011| 2016-04-06T02:53:08.206-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 379 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.206-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|2, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.483-0500 c20011| 2016-04-06T02:53:08.206-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33930 #54 (4 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:53.484-0500 c20011| 2016-04-06T02:53:08.206-0500 D COMMAND [conn54] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:53.486-0500 c20011| 2016-04-06T02:53:08.206-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 379 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.491-0500 c20011| 2016-04-06T02:53:08.207-0500 I COMMAND [conn54] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:53.497-0500 c20011| 2016-04-06T02:53:08.207-0500 D COMMAND [conn54] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.498-0500 c20011| 2016-04-06T02:53:08.207-0500 D COMMAND [conn54] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.502-0500 c20011| 2016-04-06T02:53:08.208-0500 D COMMAND [conn54] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.503-0500 c20011| 2016-04-06T02:53:08.208-0500 D QUERY [conn54] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.508-0500 c20011| 2016-04-06T02:53:08.208-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.512-0500 c20011| 2016-04-06T02:53:08.208-0500 I COMMAND [conn54] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:423 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:53.517-0500 c20011| 2016-04-06T02:53:08.208-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 380 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.518-0500 c20011| 2016-04-06T02:53:08.208-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 380 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.520-0500 c20011| 2016-04-06T02:53:08.208-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 380 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.525-0500 c20011| 2016-04-06T02:53:08.212-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 379 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.527-0500 c20011| 2016-04-06T02:53:08.213-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33931 #55 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:53.528-0500 c20011| 2016-04-06T02:53:08.215-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929185000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.529-0500 c20011| 2016-04-06T02:53:08.215-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:53.533-0500 c20011| 2016-04-06T02:53:08.215-0500 D COMMAND [conn54] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.534-0500 c20011| 2016-04-06T02:53:08.215-0500 D COMMAND [conn54] Waiting for 
'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.536-0500 c20011| 2016-04-06T02:53:08.215-0500 D COMMAND [conn54] Using 'committed' snapshot. { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.537-0500 c20011| 2016-04-06T02:53:08.215-0500 D QUERY [conn54] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.548-0500 c20011| 2016-04-06T02:53:08.215-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 383 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.215-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.551-0500 c20011| 2016-04-06T02:53:08.216-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 383 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.567-0500 c20011| 2016-04-06T02:53:08.216-0500 I COMMAND [conn54] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:408 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:53.570-0500 c20011| 2016-04-06T02:53:08.216-0500 D COMMAND [conn55] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:53.602-0500 c20011| 2016-04-06T02:53:08.216-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 383 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|1, t: 4, h: 9006822706624246442, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:53:08.213-0500-5704c06465c17830b843f1c8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188213), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -64.0 }, max: { _id: MaxKey } }, left: { min: { _id: -64.0 }, max: { _id: -63.0 }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -63.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.613-0500 c20011| 2016-04-06T02:53:08.216-0500 I COMMAND [conn55] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:53.620-0500 c20011| 2016-04-06T02:53:08.216-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|1 and ending at ts: Timestamp 1459929188000|1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.632-0500 c20011| 2016-04-06T02:53:08.216-0500 D COMMAND [conn55] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 
1459929185000|4, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.636-0500 c20011| 2016-04-06T02:53:08.216-0500 D COMMAND [conn55] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.638-0500 c20011| 2016-04-06T02:53:08.216-0500 D COMMAND [conn55] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.641-0500 c20011| 2016-04-06T02:53:08.217-0500 D QUERY [conn55] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.643-0500 c20011| 2016-04-06T02:53:08.217-0500 I COMMAND [conn55] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:423 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:53.648-0500 c20011| 2016-04-06T02:53:08.219-0500 D COMMAND [conn55] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.654-0500 c20011| 2016-04-06T02:53:08.219-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 385 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.219-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.656-0500 c20011| 2016-04-06T02:53:08.219-0500 D COMMAND [conn55] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.659-0500 c20011| 2016-04-06T02:53:08.219-0500 D COMMAND [conn55] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.660-0500 c20011| 2016-04-06T02:53:08.219-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 385 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.663-0500 c20011| 2016-04-06T02:53:08.219-0500 D QUERY [conn55] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.665-0500 c20011| 2016-04-06T02:53:08.219-0500 I COMMAND [conn55] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:414 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:53.668-0500 c20011| 2016-04-06T02:53:08.221-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 385 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|2, t: 4, h: -5651042818538587262, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20014" }, o: { $set: { ping: new Date(1459929188220), up: 61, waiting: true } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.670-0500 c20011| 2016-04-06T02:53:08.221-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.673-0500 c20011| 2016-04-06T02:53:08.221-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|2 and ending at ts: Timestamp 1459929188000|2 [js_test:multi_coll_drop] 2016-04-06T02:53:53.676-0500 c20011| 2016-04-06T02:53:08.221-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.679-0500 c20011| 2016-04-06T02:53:08.221-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.681-0500 c20011| 2016-04-06T02:53:08.221-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.683-0500 c20011| 2016-04-06T02:53:08.221-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.685-0500 c20011| 2016-04-06T02:53:08.221-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.688-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.690-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.694-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.698-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl 
writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.698-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.700-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.701-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.704-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.708-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.709-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.711-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.712-0500 c20011| 2016-04-06T02:53:08.222-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.713-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.741-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.742-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.753-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.754-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.765-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.798-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.802-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.806-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.809-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.810-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.812-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool 
repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.815-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.816-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.818-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.818-0500 c20011| 2016-04-06T02:53:08.222-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.820-0500 c20011| 2016-04-06T02:53:08.223-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.821-0500 c20011| 2016-04-06T02:53:08.223-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.822-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.823-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.824-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.828-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.829-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.830-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.831-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.832-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.833-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.833-0500 [js_test:multi_coll_drop] 2016-04-06T02:53:53.833-0500 [js_test:multi_coll_drop] 2016-04-06T02:53:53.834-0500 ---- [js_test:multi_coll_drop] 2016-04-06T02:53:53.835-0500 Create versioned connection for each mongos... 
[js_test:multi_coll_drop] 2016-04-06T02:53:53.835-0500 ---- [js_test:multi_coll_drop] 2016-04-06T02:53:53.835-0500 [js_test:multi_coll_drop] 2016-04-06T02:53:53.835-0500 [js_test:multi_coll_drop] 2016-04-06T02:53:53.838-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.839-0500 c20011| 2016-04-06T02:53:08.223-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.839-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.839-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.840-0500 c20011| 2016-04-06T02:53:08.224-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:53.840-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.842-0500 c20011| 2016-04-06T02:53:08.224-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:53:53.843-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.843-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.846-0500 ReplSetTest Could not call ismaster on node connection to mongovm16:20011: Error: error doing query: failed: network error while attempting to run command 'ismaster' on host 'mongovm16:20011' [js_test:multi_coll_drop] 2016-04-06T02:53:53.847-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.853-0500 c20011| 2016-04-06T02:53:08.224-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 387 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.224-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:53.854-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.855-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.856-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.857-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.858-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.859-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl 
writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.863-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.863-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.865-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.869-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.870-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.873-0500 c20011| 2016-04-06T02:53:08.224-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 387 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.874-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.878-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.880-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.880-0500 c20011| 2016-04-06T02:53:08.224-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.882-0500 c20011| 2016-04-06T02:53:08.225-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.889-0500 c20011| 2016-04-06T02:53:08.225-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.897-0500 c20011| 2016-04-06T02:53:08.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 388 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.898-0500 c20011| 2016-04-06T02:53:08.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 388 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.900-0500 c20011| 2016-04-06T02:53:08.225-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 387 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|3, t: 4, h: -5240106199834540916, v: 2, op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" }, o: { $set: { ping: new Date(1459929188221), up: 61, waiting: true } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.903-0500 c20011| 2016-04-06T02:53:08.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 388 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.906-0500 c20011| 2016-04-06T02:53:08.225-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.908-0500 c20011| 2016-04-06T02:53:08.225-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|3 and ending at ts: Timestamp 1459929188000|3 [js_test:multi_coll_drop] 2016-04-06T02:53:53.916-0500 c20011| 2016-04-06T02:53:08.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 390 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|2, t: 4 }, 
memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:53.916-0500 c20011| 2016-04-06T02:53:08.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.919-0500 c20011| 2016-04-06T02:53:08.225-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:53.921-0500 c20011| 2016-04-06T02:53:08.225-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.923-0500 c20011| 2016-04-06T02:53:08.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 390 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.925-0500 c20011| 2016-04-06T02:53:08.225-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.926-0500 c20011| 2016-04-06T02:53:08.225-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.927-0500 c20011| 2016-04-06T02:53:08.225-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 391 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:53.928-0500 c20011| 2016-04-06T02:53:08.225-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.929-0500 c20011| 2016-04-06T02:53:08.225-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.930-0500 c20011| 2016-04-06T02:53:08.225-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.932-0500 c20011| 2016-04-06T02:53:08.225-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.934-0500 c20011| 2016-04-06T02:53:08.225-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.938-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.944-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.948-0500 c20011| 2016-04-06T02:53:08.226-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 390 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:53.950-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.951-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.952-0500 c20011| 2016-04-06T02:53:08.226-0500 D REPL [rsSync] replication batch size is 1 
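Interleaved with the position reports, the rsBackgroundSync fetcher is tailing the sync source's oplog. Each round is an awaitable getMore that blocks for up to maxTimeMS and piggybacks the election term and the last known commit point; reissued by hand it would look roughly like this (cursor id and values copied from the log):

    // One round of the oplog tailing loop run by rsBackgroundSync-0 above.
    var reply = db.getSiblingDB("local").runCommand({
        getMore: NumberLong("23953707769"),
        collection: "oplog.rs",
        maxTimeMS: 2500,                      // wait up to 2.5 s for new entries
        term: NumberLong(4),                  // current election term
        lastKnownCommittedOpTime: { ts: Timestamp(1459929185, 4), t: NumberLong(4) }
    });
    // reply.cursor.nextBatch is [] when nothing was written before the timeout
    // ("fetcher read 0 operations"), or a batch of oplog entries otherwise.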
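The conn54/conn55 finds against config.shards and config.settings are config-server reads at read concern "majority" with an afterOpTime floor, which is why each one logs a 'Waiting for' / 'Using' committed-snapshot pair. From the shell, the equivalent command is as follows (afterOpTime is normally filled in by mongos rather than by hand):

    // A majority read that also waits for the committed snapshot to reach a
    // specific optime, as in the config.settings lookups above.
    var res = db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "balancer" },
        limit: 1,
        maxTimeMS: 30000,
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929185, 4), t: NumberLong(4) }
        }
    });
    // The server blocks until a majority-committed snapshot at or past the
    // requested optime exists, then answers from that snapshot (planSummary
    // IDHACK here, since the filter is an _id point lookup).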
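The config.changelog document fetched at 02:53:08.216 records the metadata effect of a chunk split: the parent chunk [{ _id: -64 }, MaxKey) becomes [{ _id: -64 }, { _id: -63 }) at version 1|75 and [{ _id: -63 }, MaxKey) at version 1|76, both under the same collection epoch. The test drives these splits itself, but the same split could be requested from a shell with:

    // Split multidrop.coll at _id -63, matching the changelog entry above.
    // sh.splitAt() asks for the chunk containing the given key to be split at
    // exactly that key; each split bumps the minor component of the chunk
    // version (1|75, 1|76 in the log) without changing the epoch.
    sh.splitAt("multidrop.coll", { _id: -63.0 });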
[js_test:multi_coll_drop] 2016-04-06T02:53:53.954-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.955-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.960-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.962-0500 c20011| 2016-04-06T02:53:08.226-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:53.967-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.968-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.969-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.973-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.974-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.978-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.979-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.980-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.983-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.983-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.986-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.987-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.991-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.995-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:53.995-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.025-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool 
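The bursts of "repl writer worker N" start/stop messages are the batch applier at work: for every replication batch (size 1 throughout this stretch) a 16-thread writer pool is spun up, the ops are applied in parallel, and the pool is torn down again, hence the chatter around each single-document update. The op applied in this batch can be read back from the local oplog:

    // Look up the op this batch applied: the config.mongos ping update for
    // mongovm16:20015 fetched in the getMore reply above.
    db.getSiblingDB("local").oplog.rs.find({ ts: Timestamp(1459929188, 3) });
    // Returns the { op: "u", ns: "config.mongos", o2: { _id: "mongovm16:20015" },
    // o: { $set: { ping: ..., up: 61, waiting: true } } } entry.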
[js_test:multi_coll_drop] 2016-04-06T02:53:54.027-0500 c20011| 2016-04-06T02:53:08.226-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.028-0500 c20011| 2016-04-06T02:53:08.227-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:54.061-0500 c20011| 2016-04-06T02:53:08.227-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.064-0500 c20011| 2016-04-06T02:53:08.227-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 394 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.066-0500 c20011| 2016-04-06T02:53:08.227-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 394 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.067-0500 c20011| 2016-04-06T02:53:08.227-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 394 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.072-0500 c20011| 2016-04-06T02:53:08.228-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 396 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.228-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.073-0500 c20011| 2016-04-06T02:53:08.228-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 396 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.075-0500 c20011| 2016-04-06T02:53:08.230-0500 I ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.077-0500 c20011| 2016-04-06T02:53:08.230-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 391 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:54.079-0500 c20011| 2016-04-06T02:53:08.241-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 397 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:18.241-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.080-0500 c20011| 2016-04-06T02:53:08.246-0500 D ASIO 
[NetworkInterfaceASIO-Replication-0] Starting asynchronous command 397 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.083-0500 c20011| 2016-04-06T02:53:08.248-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 397 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, opTime: { ts: Timestamp 1459929185000|1, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.085-0500 c20011| 2016-04-06T02:53:08.248-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot optime: { ts: Timestamp 1459929185000|1, t: 4 }, currentCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.089-0500 c20011| 2016-04-06T02:53:08.248-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:10.248Z [js_test:multi_coll_drop] 2016-04-06T02:53:54.093-0500 c20011| 2016-04-06T02:53:08.270-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.096-0500 c20011| 2016-04-06T02:53:08.270-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 399 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.097-0500 c20011| 2016-04-06T02:53:08.270-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 399 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.098-0500 c20011| 2016-04-06T02:53:08.271-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 396 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.099-0500 c20011| 2016-04-06T02:53:08.271-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 399 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.100-0500 c20011| 2016-04-06T02:53:08.273-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|3, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.100-0500 c20011| 2016-04-06T02:53:08.273-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:54.101-0500 c20011| 2016-04-06T02:53:08.276-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 402 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.276-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", 
maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|3, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.102-0500 c20011| 2016-04-06T02:53:08.276-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 402 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.105-0500 c20011| 2016-04-06T02:53:08.277-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 402 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|4, t: 4, h: -8682658828438772587, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.107-0500 c20011| 2016-04-06T02:53:08.280-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|4 and ending at ts: Timestamp 1459929188000|4 [js_test:multi_coll_drop] 2016-04-06T02:53:54.109-0500 c20011| 2016-04-06T02:53:08.280-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:54.110-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.113-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.114-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.115-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.115-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.116-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.117-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.118-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.119-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.119-0500 c20011| 2016-04-06T02:53:08.280-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.122-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.123-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.124-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.125-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer 
worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.125-0500 c20011| 2016-04-06T02:53:08.281-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:54.126-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.127-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.127-0500 c20011| 2016-04-06T02:53:08.281-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:54.130-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.131-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.132-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.132-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.134-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.135-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.137-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.139-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.139-0500 c20011| 2016-04-06T02:53:08.281-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.140-0500 c20011| 2016-04-06T02:53:08.282-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.140-0500 c20011| 2016-04-06T02:53:08.282-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.141-0500 c20011| 2016-04-06T02:53:08.282-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.143-0500 c20011| 2016-04-06T02:53:08.282-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.144-0500 c20011| 2016-04-06T02:53:08.282-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.145-0500 c20011| 2016-04-06T02:53:08.282-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.145-0500 c20011| 2016-04-06T02:53:08.282-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool 
repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.147-0500 c20011| 2016-04-06T02:53:08.282-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:54.154-0500 c20011| 2016-04-06T02:53:08.282-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.161-0500 c20011| 2016-04-06T02:53:08.282-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 404 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.163-0500 c20011| 2016-04-06T02:53:08.282-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 404 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.168-0500 c20011| 2016-04-06T02:53:08.282-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 405 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.282-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|3, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.169-0500 c20011| 2016-04-06T02:53:08.282-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 405 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.171-0500 c20011| 2016-04-06T02:53:08.285-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 404 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.177-0500 c20011| 2016-04-06T02:53:08.311-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.181-0500 c20011| 2016-04-06T02:53:08.311-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 407 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|4, t: 4 }, 
appliedOpTime: { ts: Timestamp 1459929188000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.182-0500 c20011| 2016-04-06T02:53:08.311-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 407 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.184-0500 c20011| 2016-04-06T02:53:08.311-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 407 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.189-0500 c20011| 2016-04-06T02:53:08.312-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 405 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.192-0500 c20011| 2016-04-06T02:53:08.312-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.193-0500 c20011| 2016-04-06T02:53:08.312-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:54.197-0500 c20011| 2016-04-06T02:53:08.312-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 410 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.312-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.198-0500 c20011| 2016-04-06T02:53:08.312-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 410 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.200-0500 c20011| 2016-04-06T02:53:08.313-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.200-0500 c20011| 2016-04-06T02:53:08.313-0500 D COMMAND [conn54] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.204-0500 c20011| 2016-04-06T02:53:08.313-0500 D COMMAND [conn54] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.206-0500 c20011| 2016-04-06T02:53:08.313-0500 D QUERY [conn54] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:54.211-0500 c20011| 2016-04-06T02:53:08.313-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:54.212-0500 c20011| 2016-04-06T02:53:08.314-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.214-0500 c20011| 2016-04-06T02:53:08.314-0500 D COMMAND [conn54] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.216-0500 c20011| 2016-04-06T02:53:08.314-0500 D COMMAND [conn54] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.218-0500 c20011| 2016-04-06T02:53:08.314-0500 D QUERY [conn54] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:54.222-0500 c20011| 2016-04-06T02:53:08.314-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|74 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:54.225-0500 c20011| 2016-04-06T02:53:08.315-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.227-0500 c20011| 2016-04-06T02:53:08.315-0500 D COMMAND [conn54] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.229-0500 c20011| 2016-04-06T02:53:08.315-0500 D COMMAND [conn54] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.238-0500 c20011| 2016-04-06T02:53:08.315-0500 D QUERY [conn54] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:54.241-0500 c20011| 2016-04-06T02:53:08.315-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:54.247-0500 c20011| 2016-04-06T02:53:08.322-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 410 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|5, t: 4, h: -3166850081498560888, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c06465c17830b843f1c9'), state: 2, when: new Date(1459929188315), why: "splitting chunk [{ _id: -63.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.248-0500 c20011| 2016-04-06T02:53:08.328-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|5 and ending at ts: Timestamp 1459929188000|5 [js_test:multi_coll_drop] 2016-04-06T02:53:54.253-0500 c20011| 2016-04-06T02:53:08.328-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:53:54.266-0500 c20011| 2016-04-06T02:53:08.328-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:54.271-0500 c20011| 2016-04-06T02:53:08.328-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.271-0500 c20011| 2016-04-06T02:53:08.328-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.274-0500 c20011| 2016-04-06T02:53:08.328-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.275-0500 c20011| 2016-04-06T02:53:08.328-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.275-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.295-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.301-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.301-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.305-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.306-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.308-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.309-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.310-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.310-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.312-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.313-0500 c20011| 2016-04-06T02:53:08.329-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:54.315-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.316-0500 c20011| 2016-04-06T02:53:08.329-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:54.317-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.319-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
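
The conn54 reads above all gate on replication: each find against config.chunks carries readConcern { level: "majority", afterOpTime: ... }, so the command waits for the committed snapshot to reach the caller's last-known optime before the IXSCAN on { ns: 1, lastmod: 1 } runs. Below is a minimal shell sketch of the same read pattern against a config server; the afterOpTime values are placeholders standing in for the optime of the caller's previous metadata write.

    // Sketch (placeholder optime): read the newest chunk for the collection,
    // waiting for the majority-committed snapshot to catch up first.
    var res = db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929188, 4), t: NumberLong(4) }
        },
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);

The repl writer worker start/shutdown churn that follows is the secondary's writer pool applying the single-op batch that carried the config.locks update (hence "replication batch size is 1" and the idhack lookup on { _id: "multidrop.coll" }).
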
2016-04-06T02:53:54.320-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.322-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.323-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.324-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.324-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.325-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.329-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.331-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.332-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.333-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.335-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.336-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.339-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.343-0500 c20011| 2016-04-06T02:53:08.329-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.346-0500 c20011| 2016-04-06T02:53:08.330-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:54.351-0500 c20011| 2016-04-06T02:53:08.330-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.354-0500 c20011| 2016-04-06T02:53:08.330-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 412 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.357-0500 c20011| 2016-04-06T02:53:08.330-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 412 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.359-0500 c20011| 2016-04-06T02:53:08.330-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 412 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.362-0500 c20011| 2016-04-06T02:53:08.336-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 414 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.336-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.364-0500 c20011| 2016-04-06T02:53:08.336-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 414 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.374-0500 c20011| 2016-04-06T02:53:08.358-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.379-0500 c20011| 2016-04-06T02:53:08.358-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 415 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, 
memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.383-0500 c20011| 2016-04-06T02:53:08.358-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 415 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.387-0500 c20011| 2016-04-06T02:53:08.358-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 415 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.388-0500 c20011| 2016-04-06T02:53:08.359-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 414 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.409-0500 c20011| 2016-04-06T02:53:08.359-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|5, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.413-0500 c20011| 2016-04-06T02:53:08.362-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:54.430-0500 c20011| 2016-04-06T02:53:08.362-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 418 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.362-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|5, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.432-0500 c20011| 2016-04-06T02:53:08.362-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 418 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.433-0500 c20011| 2016-04-06T02:53:08.362-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:33940 #56 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:54.434-0500 c20011| 2016-04-06T02:53:08.365-0500 D COMMAND [conn56] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:53:54.436-0500 c20011| 2016-04-06T02:53:08.365-0500 I COMMAND [conn56] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:54.439-0500 c20011| 2016-04-06T02:53:08.366-0500 D COMMAND [conn56] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|76 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|5, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.441-0500 c20011| 2016-04-06T02:53:08.366-0500 D COMMAND [conn56] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|5, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.450-0500 c20011| 2016-04-06T02:53:08.366-0500 D COMMAND [conn56] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|76 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|5, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.453-0500 c20011| 2016-04-06T02:53:08.367-0500 D QUERY [conn56] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:54.455-0500 c20011| 2016-04-06T02:53:08.367-0500 I COMMAND [conn56] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|76 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|5, t: 4 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:54.457-0500 c20011| 2016-04-06T02:53:08.369-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 418 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|6, t: 4, h: -6079188038794452835, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: -62.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-63.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-62.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.458-0500 c20011| 2016-04-06T02:53:08.372-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|6 and ending at ts: Timestamp 1459929188000|6 [js_test:multi_coll_drop] 2016-04-06T02:53:54.459-0500 c20011| 2016-04-06T02:53:08.372-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:54.459-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.459-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.460-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.462-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.463-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.464-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.464-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.465-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.465-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.465-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.466-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.466-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.467-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.467-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.469-0500 c20011| 2016-04-06T02:53:08.372-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:54.470-0500 c20011| 2016-04-06T02:53:08.372-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.471-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.472-0500 c20011| 2016-04-06T02:53:08.373-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-63.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:54.476-0500 c20011| 2016-04-06T02:53:08.373-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-62.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:54.477-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
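
The Request 418 batch above shows how a chunk split commits on the config servers: one applyOps oplog entry upserts both halves ([{ _id: -63.0 }, { _id: -62.0 }) at version 1|77 and [{ _id: -62.0 }, MaxKey) at 1|78) under a single { w: "majority" } write, and the secondary applies it as a replication batch of size 1. A sketch for inspecting the resulting metadata on a config server; both halves share the collection's lastmodEpoch:

    // Sketch: the two newest chunks for the collection are the halves
    // produced by the last split, with consecutive lastmod versions.
    db.getSiblingDB("config").chunks
        .find({ ns: "multidrop.coll" })
        .sort({ lastmod: -1 })
        .limit(2)
        .forEach(printjson);
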
2016-04-06T02:53:54.477-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.478-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.481-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.482-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.482-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.484-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.485-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.487-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.489-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.491-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.492-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.494-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.497-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.498-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.499-0500 c20011| 2016-04-06T02:53:08.373-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.500-0500 c20011| 2016-04-06T02:53:08.373-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:54.504-0500 c20011| 2016-04-06T02:53:08.373-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.513-0500 c20011| 2016-04-06T02:53:08.373-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 420 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|5, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.514-0500 c20011| 2016-04-06T02:53:08.373-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 420 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.521-0500 c20011| 2016-04-06T02:53:08.374-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 420 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.527-0500 c20011| 2016-04-06T02:53:08.374-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 422 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.374-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|5, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.528-0500 c20011| 2016-04-06T02:53:08.374-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 422 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.531-0500 c20011| 2016-04-06T02:53:08.378-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.535-0500 c20011| 2016-04-06T02:53:08.378-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 423 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, 
memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:54.539-0500 c20011| 2016-04-06T02:53:08.378-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 423 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.549-0500 c20011| 2016-04-06T02:53:08.378-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 423 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.550-0500 c20011| 2016-04-06T02:53:08.379-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 422 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.552-0500 c20011| 2016-04-06T02:53:08.380-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|6, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.553-0500 c20011| 2016-04-06T02:53:08.380-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:54.557-0500 c20011| 2016-04-06T02:53:08.382-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 426 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.382-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|6, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:54.558-0500 c20011| 2016-04-06T02:53:08.382-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 426 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:54.563-0500 c20011| 2016-04-06T02:53:08.382-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 426 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|7, t: 4, h: -5413670652354134036, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:53:08.379-0500-5704c06465c17830b843f1ca", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188379), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -63.0 }, max: { _id: MaxKey } }, left: { min: { _id: -63.0 }, max: { _id: -62.0 }, lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -62.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.565-0500 c20011| 2016-04-06T02:53:08.383-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|7 and ending at ts: Timestamp 1459929188000|7 [js_test:multi_coll_drop] 2016-04-06T02:53:54.567-0500 c20011| 2016-04-06T02:53:08.383-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:54.568-0500 c20011| 2016-04-06T02:53:08.383-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.570-0500 c20011| 2016-04-06T02:53:08.383-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.572-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:54.573-0500 s20014| 2016-04-06T02:53:36.389-0500 D NETWORK [Balancer] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:53:54.577-0500 s20014| 2016-04-06T02:53:36.389-0500 D NETWORK [Balancer] SocketException: remote: (NONE):0 error: 9001 socket exception [CLOSED] server [192.168.100.28:20013] [js_test:multi_coll_drop] 2016-04-06T02:53:54.579-0500 s20014| 2016-04-06T02:53:36.389-0500 D - [Balancer] User Assertion: 6:network error while attempting to run command 'ismaster' on host 'mongovm16:20013' [js_test:multi_coll_drop] 2016-04-06T02:53:54.580-0500 s20014| 2016-04-06T02:53:36.390-0500 D NETWORK [Balancer] Marking host mongovm16:20013 as failed [js_test:multi_coll_drop] 2016-04-06T02:53:54.581-0500 s20014| 2016-04-06T02:53:36.390-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:54.593-0500 s20014| 2016-04-06T02:53:36.397-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 61.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.604-0500 s20014| 2016-04-06T02:53:36.397-0500 D ASIO [conn1] startCommand: RemoteCommand 689 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.397-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.606-0500 s20014| 2016-04-06T02:53:36.398-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 689 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.610-0500 s20014| 2016-04-06T02:53:36.398-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 689 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.612-0500 s20014| 2016-04-06T02:53:36.398-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.614-0500 s20014| 2016-04-06T02:53:36.398-0500 D NETWORK [conn1] polling for status of connection to 
192.168.100.28:20010, no events [js_test:multi_coll_drop] 2016-04-06T02:53:54.620-0500 s20014| 2016-04-06T02:53:36.402-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 62.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.626-0500 s20014| 2016-04-06T02:53:36.402-0500 D ASIO [conn1] startCommand: RemoteCommand 691 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.402-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.627-0500 s20014| 2016-04-06T02:53:36.402-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 691 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.629-0500 s20014| 2016-04-06T02:53:36.402-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 691 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.630-0500 s20014| 2016-04-06T02:53:36.402-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.633-0500 s20014| 2016-04-06T02:53:36.405-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 63.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.635-0500 s20014| 2016-04-06T02:53:36.405-0500 D ASIO [conn1] startCommand: RemoteCommand 693 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.405-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.637-0500 s20014| 2016-04-06T02:53:36.405-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 693 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.640-0500 s20014| 2016-04-06T02:53:36.405-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 693 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: 
ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.642-0500 s20014| 2016-04-06T02:53:36.405-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.644-0500 s20014| 2016-04-06T02:53:36.407-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 64.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.647-0500 s20014| 2016-04-06T02:53:36.408-0500 D ASIO [conn1] startCommand: RemoteCommand 695 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.408-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.647-0500 s20014| 2016-04-06T02:53:36.414-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 695 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.650-0500 s20014| 2016-04-06T02:53:36.415-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 695 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.650-0500 s20014| 2016-04-06T02:53:36.415-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.654-0500 s20014| 2016-04-06T02:53:36.416-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 65.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.656-0500 s20014| 2016-04-06T02:53:36.417-0500 D ASIO [conn1] startCommand: RemoteCommand 697 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.417-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.658-0500 s20014| 2016-04-06T02:53:36.418-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] 
Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.659-0500 s20014| 2016-04-06T02:53:36.419-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 698 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.662-0500 s20014| 2016-04-06T02:53:36.419-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.666-0500 s20014| 2016-04-06T02:53:36.419-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 698 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:54.667-0500 s20014| 2016-04-06T02:53:36.419-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 697 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.675-0500 s20014| 2016-04-06T02:53:36.420-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 697 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.677-0500 s20014| 2016-04-06T02:53:36.420-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.682-0500 s20014| 2016-04-06T02:53:36.422-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 66.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.687-0500 s20014| 2016-04-06T02:53:36.422-0500 D ASIO [conn1] startCommand: RemoteCommand 700 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.422-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.690-0500 s20014| 2016-04-06T02:53:36.422-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 700 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.694-0500 s20014| 2016-04-06T02:53:36.422-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 700 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.697-0500 I NETWORK [thread2] trying reconnect to mongovm16:20013 (192.168.100.28) failed [js_test:multi_coll_drop] 2016-04-06T02:53:54.698-0500 [js_test:multi_coll_drop] 2016-04-06T02:53:54.698-0500 [js_test:multi_coll_drop] 2016-04-06T02:53:54.707-0500 ---- [js_test:multi_coll_drop] 
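
The W SHARDING lines above are the retry loop: each splitChunk attempt fails with LockBusy because the shard cannot acquire the distributed collection lock for multidrop.coll while the config replica set has no reachable primary ("No primary detected for set multidrop-configRS"), so mongos refetches the newest chunk (still at version 1|80) and tries the next split key. The lock itself is the config.locks document seen earlier in the oplog (state: 2 means held, and "why" records the owning operation); a sketch for inspecting it on a config server:

    // Sketch: show the holder of the distributed lock for the collection.
    db.getSiblingDB("config").locks
        .find({ _id: "multidrop.coll" })
        .forEach(printjson);
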
2016-04-06T02:53:54.708-0500 Dropping sharded collection... [js_test:multi_coll_drop] 2016-04-06T02:53:54.708-0500 ---- [js_test:multi_coll_drop] 2016-04-06T02:53:54.709-0500 [js_test:multi_coll_drop] 2016-04-06T02:53:54.709-0500 [js_test:multi_coll_drop] 2016-04-06T02:53:54.710-0500 s20014| 2016-04-06T02:53:36.423-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.724-0500 s20014| 2016-04-06T02:53:36.425-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 67.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.738-0500 s20014| 2016-04-06T02:53:36.425-0500 D ASIO [conn1] startCommand: RemoteCommand 702 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.425-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.752-0500 s20014| 2016-04-06T02:53:36.426-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 702 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.759-0500 s20014| 2016-04-06T02:53:36.426-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 702 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.759-0500 s20014| 2016-04-06T02:53:36.426-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.762-0500 s20014| 2016-04-06T02:53:36.429-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 68.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.764-0500 s20014| 2016-04-06T02:53:36.429-0500 D ASIO [conn1] startCommand: RemoteCommand 704 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.429-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.765-0500 s20014| 
2016-04-06T02:53:36.429-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 704 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.770-0500 s20014| 2016-04-06T02:53:36.429-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 704 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.770-0500 s20014| 2016-04-06T02:53:36.430-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.777-0500 s20014| 2016-04-06T02:53:36.432-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 69.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.811-0500 s20014| 2016-04-06T02:53:36.432-0500 D ASIO [conn1] startCommand: RemoteCommand 706 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.432-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.811-0500 s20014| 2016-04-06T02:53:36.432-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 706 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.817-0500 s20014| 2016-04-06T02:53:36.432-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 706 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.819-0500 s20014| 2016-04-06T02:53:36.432-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.824-0500 s20014| 2016-04-06T02:53:36.435-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 70.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.831-0500 s20014| 2016-04-06T02:53:36.435-0500 D ASIO [conn1] startCommand: 
RemoteCommand 708 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.435-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.832-0500 s20014| 2016-04-06T02:53:36.435-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 708 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.837-0500 s20014| 2016-04-06T02:53:36.436-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 708 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.843-0500 s20014| 2016-04-06T02:53:36.436-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.849-0500 s20014| 2016-04-06T02:53:36.439-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 71.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.854-0500 s20014| 2016-04-06T02:53:36.439-0500 D ASIO [conn1] startCommand: RemoteCommand 710 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.439-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.856-0500 s20014| 2016-04-06T02:53:36.439-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 710 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.860-0500 s20014| 2016-04-06T02:53:36.440-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 710 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.861-0500 s20014| 2016-04-06T02:53:36.440-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.868-0500 s20014| 2016-04-06T02:53:36.442-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 72.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: 
ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.874-0500 s20014| 2016-04-06T02:53:36.442-0500 D ASIO [conn1] startCommand: RemoteCommand 712 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.442-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.878-0500 s20014| 2016-04-06T02:53:36.442-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 712 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.880-0500 s20014| 2016-04-06T02:53:36.443-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 712 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.882-0500 s20014| 2016-04-06T02:53:36.443-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.890-0500 s20014| 2016-04-06T02:53:36.445-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 73.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.892-0500 s20014| 2016-04-06T02:53:36.445-0500 D ASIO [conn1] startCommand: RemoteCommand 714 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.445-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.895-0500 s20014| 2016-04-06T02:53:36.445-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 714 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.899-0500 s20014| 2016-04-06T02:53:36.446-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 714 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.900-0500 s20014| 2016-04-06T02:53:36.446-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.905-0500 s20014| 2016-04-06T02:53:36.448-0500 W 
SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 74.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.909-0500 s20014| 2016-04-06T02:53:36.448-0500 D ASIO [conn1] startCommand: RemoteCommand 716 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.448-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.914-0500 s20014| 2016-04-06T02:53:36.448-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 716 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.919-0500 s20014| 2016-04-06T02:53:36.449-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 716 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.920-0500 s20014| 2016-04-06T02:53:36.449-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.926-0500 s20014| 2016-04-06T02:53:36.452-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 75.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.927-0500 s20014| 2016-04-06T02:53:36.453-0500 D ASIO [conn1] startCommand: RemoteCommand 718 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.453-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.930-0500 s20014| 2016-04-06T02:53:36.453-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 718 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:54.936-0500 s20014| 2016-04-06T02:53:36.453-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 718 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], 
id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.938-0500 s20014| 2016-04-06T02:53:36.453-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.945-0500 s20014| 2016-04-06T02:53:36.456-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 76.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.945-0500 s20014| 2016-04-06T02:53:36.456-0500 D ASIO [conn1] startCommand: RemoteCommand 720 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.456-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.945-0500 s20014| 2016-04-06T02:53:36.456-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 720 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.946-0500 s20014| 2016-04-06T02:53:36.456-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 720 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.946-0500 s20014| 2016-04-06T02:53:36.456-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.946-0500 s20014| 2016-04-06T02:53:36.461-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 77.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.947-0500 s20014| 2016-04-06T02:53:36.462-0500 D ASIO [conn1] startCommand: RemoteCommand 722 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.462-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.947-0500 s20014| 2016-04-06T02:53:36.462-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 722 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.947-0500 s20014| 
2016-04-06T02:53:36.462-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 722 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.947-0500 s20014| 2016-04-06T02:53:36.463-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.948-0500 s20014| 2016-04-06T02:53:36.468-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 78.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.951-0500 s20014| 2016-04-06T02:53:36.468-0500 D ASIO [conn1] startCommand: RemoteCommand 724 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.468-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.952-0500 s20014| 2016-04-06T02:53:36.468-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 724 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.953-0500 s20014| 2016-04-06T02:53:36.469-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 724 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.956-0500 s20014| 2016-04-06T02:53:36.469-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.960-0500 s20014| 2016-04-06T02:53:36.480-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 79.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.964-0500 s20014| 2016-04-06T02:53:36.480-0500 D ASIO [conn1] startCommand: RemoteCommand 726 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.480-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.965-0500 s20014| 2016-04-06T02:53:36.480-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 726 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:54.970-0500 s20014| 2016-04-06T02:53:36.481-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 726 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.971-0500 s20014| 2016-04-06T02:53:36.481-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:54.980-0500 s20014| 2016-04-06T02:53:36.484-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 80.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:54.994-0500 s20014| 2016-04-06T02:53:36.485-0500 D ASIO [conn1] startCommand: RemoteCommand 728 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.485-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:54.995-0500 s20014| 2016-04-06T02:53:36.485-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 728 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:55.027-0500 s20014| 2016-04-06T02:53:36.485-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 728 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.034-0500 s20014| 2016-04-06T02:53:36.486-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:55.044-0500 s20014| 2016-04-06T02:53:36.494-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 81.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting 
for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.047-0500 s20014| 2016-04-06T02:53:36.494-0500 D ASIO [conn1] startCommand: RemoteCommand 730 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.494-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.047-0500 s20014| 2016-04-06T02:53:36.494-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 730 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:55.053-0500 s20014| 2016-04-06T02:53:36.494-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 730 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.053-0500 s20014| 2016-04-06T02:53:36.494-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:55.057-0500 s20014| 2016-04-06T02:53:36.501-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 82.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.059-0500 s20014| 2016-04-06T02:53:36.501-0500 D ASIO [conn1] startCommand: RemoteCommand 732 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.501-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.062-0500 s20014| 2016-04-06T02:53:36.502-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 732 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:55.069-0500 s20014| 2016-04-06T02:53:36.502-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 732 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.070-0500 s20014| 2016-04-06T02:53:36.502-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:55.077-0500 s20014| 2016-04-06T02:53:36.505-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 83.0 } ], configdb: 
"multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.080-0500 s20014| 2016-04-06T02:53:36.505-0500 D ASIO [conn1] startCommand: RemoteCommand 734 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.505-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.080-0500 s20014| 2016-04-06T02:53:36.505-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 734 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:55.084-0500 s20014| 2016-04-06T02:53:36.506-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 734 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.088-0500 s20014| 2016-04-06T02:53:36.506-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:55.094-0500 s20014| 2016-04-06T02:53:36.508-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 84.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.105-0500 s20014| 2016-04-06T02:53:36.508-0500 D ASIO [conn1] startCommand: RemoteCommand 736 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.508-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.106-0500 s20014| 2016-04-06T02:53:36.508-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 736 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:55.107-0500 s20014| 2016-04-06T02:53:36.509-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 736 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.109-0500 s20014| 2016-04-06T02:53:36.509-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: 
MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:55.115-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.116-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.119-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.121-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.123-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.124-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.124-0500 c20011| 2016-04-06T02:53:08.384-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:55.127-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.128-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.129-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.130-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.130-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.132-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.132-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.133-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.135-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.136-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.138-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.139-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.139-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool 
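The run of failed splitChunk commands above (splitKeys 69.0 through 84.0) all share one cause: the shard cannot acquire the distributed collection lock for multidrop.coll on the config server replica set, so each attempt times out with "LockBusy: timed out waiting for multidrop.coll" and mongos simply tries the next split point. A minimal shell sketch of what this looks like from the client side, assuming a connection to the s20014 mongos; the split command and the config.locks query are standard, but the helper and retry shape here are illustrative only, not the test's actual code:

// Ask mongos to split the [-61, MaxKey) chunk at a given key; mongos
// forwards this to the shard as the splitChunk command seen in the log.
function trySplitAt(key) {
    return db.adminCommand({ split: "multidrop.coll", middle: { _id: key } });
}

// Inspect the distributed lock the shard is blocked on. state: 2 means
// held, state: 0 means free; "why" records the operation holding it
// (compare the config.locks oplog entries later in this log).
printjson(db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" }));

var res = trySplitAt(69.0);
if (!res.ok) {
    // Expect the same LockBusy error text logged above while the lock
    // document still shows state: 2 from an earlier holder.
    print(res.errmsg);
}

As long as the lock document still carries state: 2 from a holder that has not released it, every split attempt fails identically, which is exactly the pattern in the warnings above.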
[js_test:multi_coll_drop] 2016-04-06T02:53:55.140-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.141-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.141-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.142-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.147-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.148-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.149-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.149-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.150-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.151-0500 c20011| 2016-04-06T02:53:08.384-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.154-0500 c20011| 2016-04-06T02:53:08.385-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:55.161-0500 c20011| 2016-04-06T02:53:08.385-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|7, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.166-0500 c20011| 2016-04-06T02:53:08.385-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 428 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|6, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|7, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.172-0500 c20011| 2016-04-06T02:53:08.385-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 428 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.181-0500 c20011| 2016-04-06T02:53:08.385-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 428 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.203-0500 c20011| 2016-04-06T02:53:08.386-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 430 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.386-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|6, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.203-0500 c20011| 2016-04-06T02:53:08.386-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 430 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.217-0500 c20011| 2016-04-06T02:53:08.404-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|7, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|7, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.225-0500 c20011| 2016-04-06T02:53:08.404-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 431 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|7, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|7, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, 
memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.232-0500 c20011| 2016-04-06T02:53:08.404-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 431 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.235-0500 c20011| 2016-04-06T02:53:08.405-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 431 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.238-0500 c20011| 2016-04-06T02:53:08.407-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 430 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.239-0500 c20011| 2016-04-06T02:53:08.407-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|7, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.240-0500 c20011| 2016-04-06T02:53:08.408-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:55.245-0500 c20011| 2016-04-06T02:53:08.408-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 434 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.408-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|7, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.246-0500 c20011| 2016-04-06T02:53:08.410-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 434 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.253-0500 c20011| 2016-04-06T02:53:08.415-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 434 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|8, t: 4, h: -4362257609073136726, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.255-0500 c20011| 2016-04-06T02:53:08.415-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|8 and ending at ts: Timestamp 1459929188000|8 [js_test:multi_coll_drop] 2016-04-06T02:53:55.257-0500 c20011| 2016-04-06T02:53:08.416-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:55.257-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.257-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.259-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.260-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.260-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.262-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.262-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.263-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.264-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.266-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.266-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.267-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.267-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.268-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.269-0500 c20011| 2016-04-06T02:53:08.416-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:55.270-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.270-0500 c20011| 2016-04-06T02:53:08.416-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:55.271-0500 c20011| 2016-04-06T02:53:08.416-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.273-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.274-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:55.291-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.294-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.310-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.317-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.320-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.324-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.328-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.332-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.333-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.333-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.335-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.338-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.339-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.341-0500 c20011| 2016-04-06T02:53:08.417-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.342-0500 c20011| 2016-04-06T02:53:08.417-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:55.349-0500 c20011| 2016-04-06T02:53:08.417-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|7, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.353-0500 c20011| 2016-04-06T02:53:08.417-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 436 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|7, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.355-0500 c20011| 2016-04-06T02:53:08.417-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 436 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.358-0500 c20011| 2016-04-06T02:53:08.418-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 437 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.418-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|7, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.361-0500 c20011| 2016-04-06T02:53:08.418-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 436 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.368-0500 c20011| 2016-04-06T02:53:08.419-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 437 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.379-0500 c20011| 2016-04-06T02:53:08.430-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.382-0500 c20011| 2016-04-06T02:53:08.430-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 439 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, 
memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.386-0500 c20011| 2016-04-06T02:53:08.431-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 439 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.388-0500 c20011| 2016-04-06T02:53:08.431-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 439 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.389-0500 c20011| 2016-04-06T02:53:08.431-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 437 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.393-0500 c20011| 2016-04-06T02:53:08.433-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.395-0500 c20011| 2016-04-06T02:53:08.433-0500 D REPL [conn54] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929188000|8, t: 4 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929188000|7, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.397-0500 c20011| 2016-04-06T02:53:08.433-0500 D REPL [conn54] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999980μs [js_test:multi_coll_drop] 2016-04-06T02:53:55.398-0500 2016-04-06T02:53:40.702-0500 I NETWORK [thread2] reconnect mongovm16:20013 (192.168.100.28) ok [js_test:multi_coll_drop] 2016-04-06T02:53:55.403-0500 c20011| 2016-04-06T02:53:08.434-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|8, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.405-0500 c20011| 2016-04-06T02:53:08.434-0500 D COMMAND [conn54] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.406-0500 c20011| 2016-04-06T02:53:08.434-0500 D COMMAND [conn54] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.406-0500 c20011| 2016-04-06T02:53:08.434-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:55.410-0500 c20011| 2016-04-06T02:53:08.434-0500 D QUERY [conn54] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:55.416-0500 c20011| 2016-04-06T02:53:08.434-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:55.419-0500 c20011| 2016-04-06T02:53:08.434-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 442 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.434-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|8, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.423-0500 c20011| 2016-04-06T02:53:08.434-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 442 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.426-0500 c20011| 2016-04-06T02:53:08.435-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|76 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.430-0500 c20011| 2016-04-06T02:53:08.435-0500 D COMMAND [conn54] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.435-0500 c20011| 2016-04-06T02:53:08.435-0500 D COMMAND [conn54] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|76 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.438-0500 c20011| 2016-04-06T02:53:08.435-0500 D QUERY [conn54] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:55.444-0500 c20011| 2016-04-06T02:53:08.435-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|76 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|8, t: 4 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:55.445-0500 c20011| 2016-04-06T02:53:08.677-0500 D COMMAND [conn51] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.446-0500 c20011| 2016-04-06T02:53:08.678-0500 D COMMAND [conn51] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:55.447-0500 c20011| 2016-04-06T02:53:08.678-0500 I COMMAND [conn51] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:55.450-0500 c20011| 2016-04-06T02:53:08.734-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 442 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|9, t: 4, h: 6545560476923728443, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c06465c17830b843f1cb'), state: 2, when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.456-0500 c20011| 2016-04-06T02:53:08.734-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|9 and ending at ts: Timestamp 1459929188000|9 [js_test:multi_coll_drop] 2016-04-06T02:53:55.459-0500 c20011| 2016-04-06T02:53:08.735-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:55.463-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.466-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.468-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.469-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.469-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.472-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.478-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.481-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.482-0500 c20011| 2016-04-06T02:53:08.735-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:55.484-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.485-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.486-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.488-0500 c20011| 2016-04-06T02:53:08.735-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:55.491-0500 c20011| 2016-04-06T02:53:08.735-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.493-0500 c20011| 2016-04-06T02:53:08.736-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.495-0500 c20011| 2016-04-06T02:53:08.736-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 444 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.736-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|8, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.496-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.499-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.501-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool 
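The conn54 entries above show how a readConcern majority find blocks until the requested optime is committed: the config server logs waitUntilOpTime, advances _lastCommittedOpTime once downstream members confirm the write, and only then answers using the 'committed' snapshot with an IXSCAN over { ns: 1, lastmod: 1 }. A minimal shell sketch of the client-visible form of that read, assuming a direct connection to a config server such as mongovm16:20011; note the afterOpTime predicate in the log is added internally by mongos and is not part of the public readConcern API:

// Read the most recently modified chunk for the collection at majority
// read concern; this is the same query mongos issues as RemoteCommand
// 704/706/708... in the log above.
var res = db.getSiblingDB("config").runCommand({
    find: "chunks",
    filter: { ns: "multidrop.coll" },
    sort: { lastmod: -1 },
    limit: 1,
    readConcern: { level: "majority" },
    maxTimeMS: 30000
});
printjson(res.cursor.firstBatch);  // expect the { _id: -61.0 } -> MaxKey chunk

If the majority commit point lags (for example, while a secondary is still applying the config.locks update), the command simply waits, which is the waitUntilOpTime pause logged by conn54 above.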
[js_test:multi_coll_drop] 2016-04-06T02:53:55.503-0500 c20011| 2016-04-06T02:53:08.737-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 444 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.504-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.508-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.510-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.511-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.511-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.512-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.514-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.514-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.517-0500 c20011| 2016-04-06T02:53:08.737-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.518-0500 c20011| 2016-04-06T02:53:08.739-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.518-0500 c20011| 2016-04-06T02:53:08.739-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.519-0500 c20011| 2016-04-06T02:53:08.739-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.521-0500 c20011| 2016-04-06T02:53:08.739-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.521-0500 c20011| 2016-04-06T02:53:08.739-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.523-0500 c20011| 2016-04-06T02:53:08.739-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.524-0500 c20011| 2016-04-06T02:53:08.739-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.526-0500 c20011| 2016-04-06T02:53:08.739-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:55.565-0500 c20011| 2016-04-06T02:53:08.740-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.568-0500 c20011| 2016-04-06T02:53:08.740-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 445 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|8, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.569-0500 c20011| 2016-04-06T02:53:08.740-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 445 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.572-0500 c20011| 2016-04-06T02:53:08.740-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 445 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.579-0500 c20011| 2016-04-06T02:53:08.755-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.587-0500 c20011| 2016-04-06T02:53:08.755-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 447 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.589-0500 c20011| 2016-04-06T02:53:08.755-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 447 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.590-0500 c20011| 2016-04-06T02:53:08.755-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 447 finished with response: { ok: 1.0 } 
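annotation: the replSetUpdatePosition commands above are how this secondary reports its durable and applied optimes (and those of the members it relays for) to its sync source. The same per-member progress is visible from the shell via replSetGetStatus; a sketch, assuming protocol-version-1 optimes rendered as { ts, t } documents as in this log:

    // Print each member's replication progress, comparable to the
    // optimes arrays carried by the replSetUpdatePosition payloads above.
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function(m) {
        print(m.name + " " + m.stateStr + " optime=" + tojson(m.optime));
    });
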
[js_test:multi_coll_drop] 2016-04-06T02:53:55.593-0500 c20011| 2016-04-06T02:53:08.756-0500 D COMMAND [conn56] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|9, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.597-0500 c20011| 2016-04-06T02:53:08.756-0500 D REPL [conn56] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929188000|9, t: 4 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929188000|8, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.600-0500 c20011| 2016-04-06T02:53:08.756-0500 D REPL [conn56] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999970μs [js_test:multi_coll_drop] 2016-04-06T02:53:55.620-0500 c20011| 2016-04-06T02:53:08.763-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 444 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.622-0500 c20011| 2016-04-06T02:53:08.769-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|9, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.625-0500 c20011| 2016-04-06T02:53:08.769-0500 D COMMAND [conn56] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|9, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.629-0500 c20011| 2016-04-06T02:53:08.769-0500 D COMMAND [conn56] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|9, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.631-0500 c20011| 2016-04-06T02:53:08.769-0500 D QUERY [conn56] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:55.635-0500 c20011| 2016-04-06T02:53:08.769-0500 I COMMAND [conn56] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|9, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:53:55.638-0500 c20011| 2016-04-06T02:53:08.770-0500 D COMMAND [conn56] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|78 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|9, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.644-0500 2016-04-06T02:53:40.920-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 10 secs, remote host 192.168.100.28:20013) [js_test:multi_coll_drop] 2016-04-06T02:53:55.645-0500 c20011| 2016-04-06T02:53:08.770-0500 D COMMAND [conn56] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|9, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.646-0500 c20011| 2016-04-06T02:53:08.770-0500 D COMMAND [conn56] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|78 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|9, t: 4 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.649-0500 c20011| 2016-04-06T02:53:08.770-0500 D QUERY [conn56] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:55.657-0500 c20011| 2016-04-06T02:53:08.770-0500 I COMMAND [conn56] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|78 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929188000|9, t: 4 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:55.666-0500 c20011| 2016-04-06T02:53:08.772-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:55.668-0500 c20011| 2016-04-06T02:53:08.772-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 450 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.772-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|9, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.670-0500 c20011| 2016-04-06T02:53:08.772-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 450 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.676-0500 c20011| 2016-04-06T02:53:08.776-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 450 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|10, t: 4, h: -7436856840318092141, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: -61.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-62.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-61.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.679-0500 c20011| 2016-04-06T02:53:08.776-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|10 and ending at ts: Timestamp 1459929188000|10 [js_test:multi_coll_drop] 2016-04-06T02:53:55.680-0500 c20011| 2016-04-06T02:53:08.776-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:55.681-0500 c20011| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.682-0500 c20011| 2016-04-06T02:53:08.776-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.683-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.685-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.691-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.693-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.698-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.700-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.700-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.702-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.715-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.717-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.718-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.718-0500 c20011| 2016-04-06T02:53:08.777-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:55.719-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.721-0500 c20011| 2016-04-06T02:53:08.777-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-62.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:55.725-0500 c20011| 2016-04-06T02:53:08.777-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-61.0" } [js_test:multi_coll_drop] 2016-04-06T02:53:55.726-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.726-0500 c20011| 2016-04-06T02:53:08.777-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.728-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:53:55.730-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.733-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.764-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.764-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.770-0500 c20011| 2016-04-06T02:53:08.778-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 452 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.778-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|9, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:55.771-0500 c20011| 2016-04-06T02:53:08.778-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 452 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.772-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.772-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.772-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.776-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.793-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.794-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.796-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.798-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.799-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.801-0500 c20011| 2016-04-06T02:53:08.778-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.805-0500 c20011| 2016-04-06T02:53:08.779-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:55.806-0500 c20011| 2016-04-06T02:53:08.779-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:55.819-0500 c20011| 2016-04-06T02:53:08.779-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.826-0500 c20011| 2016-04-06T02:53:08.779-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 453 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|9, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:55.828-0500 c20011| 2016-04-06T02:53:08.779-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 453 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.830-0500 s20015| 2016-04-06T02:53:37.373-0500 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 129 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:07.373-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:55.833-0500 s20015| 2016-04-06T02:53:37.374-0500 I ASIO [UserCacheInvalidator] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:55.834-0500 s20015| 2016-04-06T02:53:37.374-0500 I ASIO [UserCacheInvalidator] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:55.837-0500 d20010| 2016-04-06T02:53:36.386-0500 I NETWORK [conn5] Socket closed remotely, no longer connected (idle 7 secs, remote host 192.168.100.28:20013) [js_test:multi_coll_drop] 2016-04-06T02:53:55.842-0500 d20010| 2016-04-06T02:53:36.396-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.846-0500 d20010| 2016-04-06T02:53:36.397-0500 I COMMAND [conn5] command admin.$cmd command: splitChunk { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 61.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } numYields:0 reslen:250 locks:{} protocol:op_command 4449ms [js_test:multi_coll_drop] 2016-04-06T02:53:55.857-0500 d20010| 2016-04-06T02:53:36.398-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", 
splitKeys: [ { _id: 62.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.859-0500 d20010| 2016-04-06T02:53:36.402-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.863-0500 d20010| 2016-04-06T02:53:36.402-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 63.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.868-0500 d20010| 2016-04-06T02:53:36.405-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.880-0500 d20010| 2016-04-06T02:53:36.405-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 64.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.884-0500 d20010| 2016-04-06T02:53:36.407-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.885-0500 d20010| 2016-04-06T02:53:36.415-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 65.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.886-0500 d20010| 2016-04-06T02:53:36.416-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.890-0500 d20010| 2016-04-06T02:53:36.420-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 66.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.893-0500 d20010| 2016-04-06T02:53:36.422-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for 
multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.897-0500 d20010| 2016-04-06T02:53:36.423-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 67.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.899-0500 d20010| 2016-04-06T02:53:36.425-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.903-0500 d20010| 2016-04-06T02:53:36.426-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 68.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.905-0500 d20010| 2016-04-06T02:53:36.429-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.911-0500 d20010| 2016-04-06T02:53:36.430-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 69.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.913-0500 d20010| 2016-04-06T02:53:36.432-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.915-0500 d20010| 2016-04-06T02:53:36.432-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 70.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.917-0500 d20010| 2016-04-06T02:53:36.435-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.923-0500 d20010| 2016-04-06T02:53:36.436-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 71.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: 
ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.924-0500 d20010| 2016-04-06T02:53:36.439-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.930-0500 d20010| 2016-04-06T02:53:36.440-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 72.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.933-0500 d20010| 2016-04-06T02:53:36.442-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.935-0500 d20010| 2016-04-06T02:53:36.443-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 73.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.940-0500 d20010| 2016-04-06T02:53:36.445-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.952-0500 d20010| 2016-04-06T02:53:36.446-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 74.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.955-0500 d20010| 2016-04-06T02:53:36.448-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.959-0500 d20010| 2016-04-06T02:53:36.449-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 75.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.962-0500 d20010| 2016-04-06T02:53:36.452-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.964-0500 d20010| 2016-04-06T02:53:36.453-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", 
keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 76.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.969-0500 d20010| 2016-04-06T02:53:36.456-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.972-0500 d20010| 2016-04-06T02:53:36.456-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 77.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.976-0500 d20010| 2016-04-06T02:53:36.461-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.982-0500 d20010| 2016-04-06T02:53:36.463-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 78.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.984-0500 d20010| 2016-04-06T02:53:36.468-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.990-0500 d20010| 2016-04-06T02:53:36.469-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 79.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.993-0500 d20010| 2016-04-06T02:53:36.480-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:55.996-0500 d20010| 2016-04-06T02:53:36.481-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 80.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:55.999-0500 d20010| 2016-04-06T02:53:36.484-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to 
split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.023-0500 d20010| 2016-04-06T02:53:36.486-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 81.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.026-0500 d20010| 2016-04-06T02:53:36.493-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.040-0500 d20010| 2016-04-06T02:53:36.495-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 82.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.046-0500 d20010| 2016-04-06T02:53:36.501-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.053-0500 d20010| 2016-04-06T02:53:36.502-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 83.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.056-0500 d20010| 2016-04-06T02:53:36.505-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.060-0500 d20010| 2016-04-06T02:53:36.506-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 84.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.062-0500 d20010| 2016-04-06T02:53:36.508-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.065-0500 d20010| 2016-04-06T02:53:36.509-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 85.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", 
shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.067-0500 d20010| 2016-04-06T02:53:36.514-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.069-0500 d20010| 2016-04-06T02:53:36.515-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 86.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.073-0500 d20010| 2016-04-06T02:53:36.518-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.077-0500 d20010| 2016-04-06T02:53:36.519-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 87.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.079-0500 d20010| 2016-04-06T02:53:36.521-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.083-0500 d20010| 2016-04-06T02:53:36.522-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 88.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.087-0500 d20010| 2016-04-06T02:53:36.530-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.092-0500 d20010| 2016-04-06T02:53:36.532-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 89.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.096-0500 d20010| 2016-04-06T02:53:36.539-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.098-0500 d20010| 2016-04-06T02:53:36.539-0500 I 
SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 90.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.102-0500 d20010| 2016-04-06T02:53:36.555-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.106-0500 d20010| 2016-04-06T02:53:36.557-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 91.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.107-0500 d20010| 2016-04-06T02:53:36.574-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.114-0500 d20010| 2016-04-06T02:53:36.576-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 92.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.117-0500 d20010| 2016-04-06T02:53:36.584-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.138-0500 d20010| 2016-04-06T02:53:36.585-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 93.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.140-0500 d20010| 2016-04-06T02:53:36.595-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.146-0500 d20010| 2016-04-06T02:53:36.596-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 94.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.147-0500 d20010| 2016-04-06T02:53:36.604-0500 W 
SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.153-0500 d20010| 2016-04-06T02:53:36.605-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 95.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.155-0500 d20010| 2016-04-06T02:53:36.608-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.159-0500 d20010| 2016-04-06T02:53:36.609-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 96.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.162-0500 d20010| 2016-04-06T02:53:36.618-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.167-0500 d20010| 2016-04-06T02:53:36.619-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 97.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.171-0500 d20010| 2016-04-06T02:53:36.622-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.177-0500 d20010| 2016-04-06T02:53:36.623-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 98.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.182-0500 d20010| 2016-04-06T02:53:36.627-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.189-0500 d20010| 2016-04-06T02:53:36.628-0500 I SHARDING [conn5] received splitChunk request: { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 99.0 } ], configdb: 
"multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } [js_test:multi_coll_drop] 2016-04-06T02:53:56.192-0500 d20010| 2016-04-06T02:53:36.639-0500 W SHARDING [conn5] could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:56.197-0500 d20010| 2016-04-06T02:53:39.744-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:35660 #6 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:56.200-0500 d20010| 2016-04-06T02:53:39.761-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:35663 #7 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:56.202-0500 c20012| 2016-04-06T02:53:36.385-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.202-0500 c20012| 2016-04-06T02:53:36.385-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:56.204-0500 c20012| 2016-04-06T02:53:36.385-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 6 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.207-0500 c20012| 2016-04-06T02:53:36.385-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1391 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.209-0500 c20012| 2016-04-06T02:53:36.385-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.210-0500 c20012| 2016-04-06T02:53:36.385-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1398 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:56.213-0500 c20012| 2016-04-06T02:53:36.385-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1410 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.214-0500 c20012| 2016-04-06T02:53:36.385-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.217-0500 c20012| 2016-04-06T02:53:36.385-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1411 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:56.224-0500 c20012| 2016-04-06T02:53:36.385-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.225-0500 c20012| 2016-04-06T02:53:36.385-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1403 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:56.226-0500 c20012| 2016-04-06T02:53:36.385-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.229-0500 c20012| 2016-04-06T02:53:36.385-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1407 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:56.231-0500 c20012| 2016-04-06T02:53:36.386-0500 D COMMAND [conn31] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } 
[js_test:multi_coll_drop] 2016-04-06T02:53:56.245-0500 c20012| 2016-04-06T02:53:36.386-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1410 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:41.995-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } reason: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:56.251-0500 c20012| 2016-04-06T02:53:36.386-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1410 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:56.257-0500 c20012| 2016-04-06T02:53:36.386-0500 D QUERY [conn31] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:56.264-0500 c20012| 2016-04-06T02:53:36.386-0500 I COMMAND [conn31] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:274 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.266-0500 c20012| 2016-04-06T02:53:36.386-0500 I REPL [ReplicationExecutor] Error in heartbeat request to mongovm16:20013; HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:56.269-0500 c20012| 2016-04-06T02:53:36.386-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:36.386Z [js_test:multi_coll_drop] 2016-04-06T02:53:56.272-0500 c20012| 2016-04-06T02:53:36.386-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1415 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:41.995-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.276-0500 c20012| 2016-04-06T02:53:36.386-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.280-0500 c20012| 2016-04-06T02:53:36.386-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.283-0500 c20012| 2016-04-06T02:53:36.386-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.285-0500 c20012| 2016-04-06T02:53:36.386-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:53:56.294-0500 c20012| 2016-04-06T02:53:36.387-0500 D COMMAND [conn40] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929210000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.314-0500 c20012| 2016-04-06T02:53:36.387-0500 I COMMAND [conn40] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929210000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 7 } planSummary: COLLSCAN cursorid:23538204668 keysExamined:0 docsExamined:2 numYields:0 nreturned:2 reslen:638 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 
protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.319-0500 c20012| 2016-04-06T02:53:36.389-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.319-0500 c20012| 2016-04-06T02:53:36.395-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.320-0500 c20012| 2016-04-06T02:53:36.395-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.321-0500 c20012| 2016-04-06T02:53:36.395-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.324-0500 c20012| 2016-04-06T02:53:36.396-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25711 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.327-0500 c20012| 2016-04-06T02:53:36.397-0500 D COMMAND [conn39] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:56.329-0500 c20012| 2016-04-06T02:53:36.397-0500 D COMMAND [conn39] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:56.331-0500 c20012| 2016-04-06T02:53:36.397-0500 D REPL [conn39] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|1, t: 7 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.333-0500 c20012| 2016-04-06T02:53:36.397-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.336-0500 c20012| 2016-04-06T02:53:36.397-0500 D REPL [conn39] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.341-0500 c20012| 2016-04-06T02:53:36.397-0500 I COMMAND [conn39] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.344-0500 c20012| 2016-04-06T02:53:36.397-0500 D COMMAND [conn39] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 
1459929216000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:56.344-0500 c20012| 2016-04-06T02:53:36.397-0500 D COMMAND [conn39] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:56.347-0500 c20012| 2016-04-06T02:53:36.397-0500 D REPL [conn39] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|1, t: 7 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.353-0500 c20012| 2016-04-06T02:53:36.397-0500 D REPL [conn39] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.366-0500 c20012| 2016-04-06T02:53:36.397-0500 I COMMAND [conn39] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.383-0500 c20012| 2016-04-06T02:53:36.397-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1416 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.390-0500 c20012| 2016-04-06T02:53:36.398-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.394-0500 c20012| 2016-04-06T02:53:36.398-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.406-0500 c20012| 2016-04-06T02:53:36.398-0500 D COMMAND [conn42] Using 'committed' snapshot. 
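Each config read in these lines pairs readConcern level "majority" with an afterOpTime, so the server blocks ("Waiting for 'committed' snapshot to be available for reading") until its majority-committed snapshot has advanced to at least that optime, and only then answers from the committed snapshot. The afterOpTime field is an internal detail of the sharding code; a plain majority read is the supported way to issue the same query by hand, as in this minimal shell sketch:

    // Majority read of the newest chunk for the collection, mirroring the
    // config.chunks query that the sharding code issues internally.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });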
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.407-0500 c20012| 2016-04-06T02:53:36.398-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:56.410-0500 c20012| 2016-04-06T02:53:36.398-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.411-0500 c20012| 2016-04-06T02:53:36.398-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:41422 #45 (14 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:56.413-0500 c20012| 2016-04-06T02:53:36.398-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.414-0500 c20012| 2016-04-06T02:53:36.398-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1416 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:56.416-0500 c20012| 2016-04-06T02:53:36.398-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1415 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:56.467-0500 c20012| 2016-04-06T02:53:36.398-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f247'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216398), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.469-0500 c20012| 2016-04-06T02:53:36.398-0500 D COMMAND [conn45] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.472-0500 c20012| 2016-04-06T02:53:36.399-0500 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.476-0500 c20012| 2016-04-06T02:53:36.399-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.479-0500 c20012| 2016-04-06T02:53:36.399-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.484-0500 c20012| 2016-04-06T02:53:36.399-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.487-0500 c20012| 2016-04-06T02:53:36.399-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.488-0500 c20012| 2016-04-06T02:53:36.399-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.488-0500 c20012| 2016-04-06T02:53:36.399-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.494-0500 c20012| 2016-04-06T02:53:36.399-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f247'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216398), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.498-0500 c20012| 2016-04-06T02:53:36.399-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f247'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216398), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f247'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216398), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.501-0500 c20012| 2016-04-06T02:53:36.399-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1415 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, opTime: { ts: Timestamp 1459929210000|1, t: 6 } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.501-0500 c20012| 2016-04-06T02:53:36.399-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:53:56.502-0500 c20012| 2016-04-06T02:53:36.399-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:38.399Z [js_test:multi_coll_drop] 2016-04-06T02:53:56.507-0500 c20012| 2016-04-06T02:53:36.399-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 
1459929216000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:56.511-0500 c20012| 2016-04-06T02:53:36.399-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.514-0500 c20012| 2016-04-06T02:53:36.399-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:53:56.516-0500 c20012| 2016-04-06T02:53:36.399-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.518-0500 c20012| 2016-04-06T02:53:36.399-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.520-0500 c20012| 2016-04-06T02:53:36.399-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.523-0500 c20012| 2016-04-06T02:53:36.399-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|1, t: 7 } and is durable through: { ts: Timestamp 1459929216000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.524-0500 c20012| 2016-04-06T02:53:36.399-0500 D REPL [conn45] Updating _lastCommittedOpTime to { ts: Timestamp 1459929216000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.526-0500 c20012| 2016-04-06T02:53:36.399-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.534-0500 c20012| 2016-04-06T02:53:36.400-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.540-0500 c20012| 2016-04-06T02:53:36.400-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929210000|1, t: 6 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: 
{ acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.542-0500 c20012| 2016-04-06T02:53:36.400-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929210000|1, t: 6 } } cursorid:23538204668 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 10ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.544-0500 c20012| 2016-04-06T02:53:36.400-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.546-0500 c20012| 2016-04-06T02:53:36.400-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.550-0500 c20012| 2016-04-06T02:53:36.400-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.552-0500 c20012| 2016-04-06T02:53:36.400-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.556-0500 c20012| 2016-04-06T02:53:36.400-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.558-0500 c20012| 2016-04-06T02:53:36.400-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.560-0500 c20012| 2016-04-06T02:53:36.401-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.562-0500 c20012| 2016-04-06T02:53:36.402-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.564-0500 c20012| 2016-04-06T02:53:36.402-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.567-0500 c20012| 2016-04-06T02:53:36.402-0500 D COMMAND [conn42] Using 'committed' snapshot. 
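The heartbeat traffic earlier in this stretch shows a full failure-and-recovery cycle: the heartbeat to mongovm16:20013 fails with HostUnreachable: End of file, the replication executor drops the unhealthy pooled connections and spawns fresh ones, reconnects, and the next heartbeat response moves the member back to state SECONDARY. The same member-health view is visible from outside with replSetGetStatus; a minimal sketch, assuming a shell connected to any member of multidrop-configRS:

    // Report replica-set members that are currently failing heartbeats.
    var status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        // health is 1 while the member answers heartbeats, 0 otherwise.
        if (m.health !== 1) {
            print("unreachable: " + m.name + " (last known state " + m.stateStr + ")");
        }
    });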
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.568-0500 c20012| 2016-04-06T02:53:36.402-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.570-0500 c20012| 2016-04-06T02:53:36.402-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:56.575-0500 c20012| 2016-04-06T02:53:36.402-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.577-0500 c20012| 2016-04-06T02:53:36.402-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f248'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216402), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.577-0500 c20012| 2016-04-06T02:53:36.402-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.578-0500 c20012| 2016-04-06T02:53:36.402-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.581-0500 c20012| 2016-04-06T02:53:36.402-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.583-0500 c20012| 2016-04-06T02:53:36.403-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.584-0500 c20012| 2016-04-06T02:53:36.403-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.586-0500 c20012| 2016-04-06T02:53:36.403-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.592-0500 c20012| 2016-04-06T02:53:36.403-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f248'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216402), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.597-0500 c20012| 2016-04-06T02:53:36.403-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f248'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216402), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f248'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216402), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.602-0500 c20012| 2016-04-06T02:53:36.403-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.603-0500 c20012| 2016-04-06T02:53:36.403-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.604-0500 c20012| 2016-04-06T02:53:36.403-0500 D COMMAND [conn38] Using 'committed' snapshot. 
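Because a held lock surfaces as a duplicate-key error rather than a clean no-match, the caller simply retries: each attempt in this log uses a fresh ObjectId ('...f247', '...f248', '...f249', ...) and re-reads config.locks and config.lockpings between attempts to decide whether the current holder can be overtaken. A hedged sketch of that loop; the attempt count and bare retry here are illustrative, not the server's exact policy:

    // Illustrative retry loop around the lock attempt; treats E11000 as "lock busy".
    function tryLock(configDB) {
        for (var attempt = 0; attempt < 10; attempt++) {
            try {
                return configDB.locks.findAndModify({
                    query: { _id: "multidrop.coll", state: 0 },
                    update: { $set: { ts: ObjectId(), state: 2, when: new Date() } },
                    upsert: true, new: true
                });
            } catch (e) {
                // Legacy shells surface this as an error mentioning E11000.
                if (e.code !== 11000 && !/E11000/.test(String(e))) throw e;
                // The real code re-reads config.locks and config.lockpings here
                // to check whether the holder's ping is stale enough to force it.
            }
        }
        return null;   // could not acquire
    }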
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.607-0500 c20012| 2016-04-06T02:53:36.403-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.613-0500 c20012| 2016-04-06T02:53:36.403-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.618-0500 c20012| 2016-04-06T02:53:36.403-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.620-0500 c20012| 2016-04-06T02:53:36.403-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.633-0500 c20012| 2016-04-06T02:53:36.403-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.636-0500 c20012| 2016-04-06T02:53:36.403-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.639-0500 c20012| 2016-04-06T02:53:36.403-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.649-0500 2016-04-06T02:53:41.484-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 11 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:53:56.652-0500 c20012| 2016-04-06T02:53:36.404-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.656-0500 c20012| 2016-04-06T02:53:36.404-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.660-0500 c20012| 2016-04-06T02:53:36.405-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: 
"majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.665-0500 c20012| 2016-04-06T02:53:36.405-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.669-0500 c20012| 2016-04-06T02:53:36.405-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.672-0500 c20012| 2016-04-06T02:53:36.405-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:56.680-0500 c20012| 2016-04-06T02:53:36.405-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.683-0500 c20012| 2016-04-06T02:53:36.405-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f249'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216405), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.685-0500 c20012| 2016-04-06T02:53:36.405-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.688-0500 c20012| 2016-04-06T02:53:36.406-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.691-0500 c20012| 2016-04-06T02:53:36.406-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.692-0500 c20012| 2016-04-06T02:53:36.406-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.693-0500 c20012| 2016-04-06T02:53:36.406-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.694-0500 c20012| 2016-04-06T02:53:36.406-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.698-0500 c20012| 2016-04-06T02:53:36.406-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f249'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216405), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.719-0500 c20012| 2016-04-06T02:53:36.406-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f249'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216405), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f249'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216405), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.725-0500 c20012| 2016-04-06T02:53:36.406-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.736-0500 c20012| 2016-04-06T02:53:36.406-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.738-0500 c20012| 2016-04-06T02:53:36.406-0500 D COMMAND [conn38] Using 'committed' snapshot. 
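The conn40 traffic scattered through these lines is a secondary's oplog fetcher: a tailable, awaitData find on local.oplog.rs starting from the last applied timestamp, then repeated getMore calls that also carry term and lastKnownCommittedOpTime so the primary can piggyback commit-point updates on the replies. Roughly the same command pair issued by hand (term and lastKnownCommittedOpTime are replication-internal fields and are omitted here):

    // Open a tailable oplog cursor from a known timestamp, then poll it once.
    var local = db.getSiblingDB("local");
    var r = local.runCommand({
        find: "oplog.rs",
        filter: { ts: { $gte: Timestamp(1459929210, 1) } },
        tailable: true, awaitData: true, oplogReplay: true,
        maxTimeMS: 60000
    });
    // The getMore blocks up to maxTimeMS waiting for new oplog entries.
    local.runCommand({ getMore: r.cursor.id, collection: "oplog.rs", maxTimeMS: 2500 });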
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.740-0500 c20012| 2016-04-06T02:53:36.406-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.745-0500 c20012| 2016-04-06T02:53:36.406-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.748-0500 c20012| 2016-04-06T02:53:36.406-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.750-0500 c20012| 2016-04-06T02:53:36.406-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.754-0500 c20012| 2016-04-06T02:53:36.406-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.759-0500 c20012| 2016-04-06T02:53:36.406-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.765-0500 c20012| 2016-04-06T02:53:36.406-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.765-0500 c20012| 2016-04-06T02:53:36.407-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.766-0500 c20012| 2016-04-06T02:53:36.407-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.775-0500 c20012| 2016-04-06T02:53:36.414-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.780-0500 c20012| 2016-04-06T02:53:36.414-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.788-0500 c20012| 2016-04-06T02:53:36.414-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.792-0500 c20012| 2016-04-06T02:53:36.414-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:56.804-0500 c20012| 2016-04-06T02:53:36.414-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.812-0500 c20012| 2016-04-06T02:53:36.415-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216415), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.817-0500 c20012| 2016-04-06T02:53:36.415-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.820-0500 c20012| 2016-04-06T02:53:36.415-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.821-0500 c20012| 2016-04-06T02:53:36.415-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
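The replSetUpdatePosition documents carry each member's applied and durable optimes, and the primary advances _lastCommittedOpTime (as logged above) to the newest optime that a majority of members have made durable; its own position comes from local tracking rather than these messages, which is why the commit point can move to term-7's optime while the message still reports member 1 at term 6. A simplified sketch of that rule for a 3-member set, using the timestamps from this log:

    // Majority commit point: the (majority)-th highest durable optime.
    function commitPoint(durableSecs) {
        var majority = Math.floor(durableSecs.length / 2) + 1;   // 2 of 3 here
        durableSecs.sort(function (a, b) { return b - a; });     // descending
        return durableSecs[majority - 1];
    }
    // Member 0 and the primary durable at 1459929216, member 2 at 1459929201:
    commitPoint([1459929216, 1459929216, 1459929201]);   // 1459929216, as logged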
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.839-0500 c20012| 2016-04-06T02:53:36.415-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.840-0500 c20012| 2016-04-06T02:53:36.415-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.841-0500 c20012| 2016-04-06T02:53:36.415-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.843-0500 c20012| 2016-04-06T02:53:36.415-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216415), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.848-0500 c20012| 2016-04-06T02:53:36.415-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216415), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f24a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216415), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.853-0500 c20012| 2016-04-06T02:53:36.415-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.856-0500 c20012| 2016-04-06T02:53:36.415-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.858-0500 c20012| 2016-04-06T02:53:36.415-0500 D COMMAND [conn38] Using 'committed' snapshot. 
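The recurring config.chunks query (filter on ns, sort { lastmod: -1 }, limit 1) fetches the collection's highest chunk version, which is how the split code learns the current collection version before each attempt; the { ns: 1, lastmod: 1 } index makes it a one-key IXSCAN. The shell equivalent:

    // Highest chunk version for the collection, served by the
    // { ns: 1, lastmod: 1 } index on config.chunks.
    db.getSiblingDB("config").chunks
        .find({ ns: "multidrop.coll" })
        .sort({ lastmod: -1 })
        .limit(1);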
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.859-0500 c20012| 2016-04-06T02:53:36.415-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.864-0500 c20012| 2016-04-06T02:53:36.415-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.868-0500 c20012| 2016-04-06T02:53:36.415-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.871-0500 c20012| 2016-04-06T02:53:36.415-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.874-0500 c20012| 2016-04-06T02:53:36.415-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.875-0500 c20012| 2016-04-06T02:53:36.416-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.880-0500 c20012| 2016-04-06T02:53:36.416-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.882-0500 c20012| 2016-04-06T02:53:36.416-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.887-0500 c20012| 2016-04-06T02:53:36.416-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.893-0500 c20012| 2016-04-06T02:53:36.420-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216420), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.899-0500 c20012| 2016-04-06T02:53:36.420-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.903-0500 c20012| 2016-04-06T02:53:36.420-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.905-0500 c20012| 2016-04-06T02:53:36.420-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.906-0500 c20012| 2016-04-06T02:53:36.420-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.907-0500 c20012| 2016-04-06T02:53:36.420-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.909-0500 c20012| 2016-04-06T02:53:36.420-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.920-0500 c20012| 2016-04-06T02:53:36.420-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216420), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.926-0500 c20012| 2016-04-06T02:53:36.420-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216420), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f24b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216420), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.930-0500 c20012| 2016-04-06T02:53:36.420-0500 D COMMAND [conn38] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.932-0500 c20012| 2016-04-06T02:53:36.420-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.941-0500 c20012| 2016-04-06T02:53:36.421-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.942-0500 c20012| 2016-04-06T02:53:36.421-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.951-0500 c20012| 2016-04-06T02:53:36.421-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.957-0500 c20012| 2016-04-06T02:53:36.421-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.963-0500 c20012| 2016-04-06T02:53:36.421-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:56.964-0500 c20012| 2016-04-06T02:53:36.421-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.967-0500 c20012| 2016-04-06T02:53:36.421-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:56.971-0500 c20012| 2016-04-06T02:53:36.421-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.972-0500 c20012| 2016-04-06T02:53:36.421-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.972-0500 c20012| 2016-04-06T02:53:36.422-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:56.977-0500 c20012| 2016-04-06T02:53:36.423-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216423), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.979-0500 c20012| 2016-04-06T02:53:36.423-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.982-0500 c20012| 2016-04-06T02:53:36.423-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.984-0500 c20012| 2016-04-06T02:53:36.423-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:56.985-0500 c20012| 2016-04-06T02:53:36.423-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:56.988-0500 c20012| 2016-04-06T02:53:36.423-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.989-0500 c20012| 2016-04-06T02:53:36.423-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:56.995-0500 c20012| 2016-04-06T02:53:36.423-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216423), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.015-0500 c20012| 2016-04-06T02:53:36.423-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216423), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f24c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216423), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.025-0500 c20012| 2016-04-06T02:53:36.423-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.025-0500 c20012| 2016-04-06T02:53:36.423-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.031-0500 c20012| 2016-04-06T02:53:36.423-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.033-0500 c20012| 2016-04-06T02:53:36.423-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.046-0500 c20012| 2016-04-06T02:53:36.423-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.076-0500 c20012| 2016-04-06T02:53:36.424-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.084-0500 c20012| 2016-04-06T02:53:36.424-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.100-0500 c20012| 2016-04-06T02:53:36.424-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.103-0500 c20012| 2016-04-06T02:53:36.424-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.115-0500 c20012| 2016-04-06T02:53:36.424-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.117-0500 c20012| 2016-04-06T02:53:36.424-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.118-0500 c20012| 2016-04-06T02:53:36.425-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.124-0500 c20012| 2016-04-06T02:53:36.426-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.125-0500 c20012| 2016-04-06T02:53:36.426-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.130-0500 c20012| 2016-04-06T02:53:36.426-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.130-0500 c20012| 2016-04-06T02:53:36.426-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:57.134-0500 c20012| 2016-04-06T02:53:36.426-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.139-0500 c20012| 2016-04-06T02:53:36.426-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24d'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216426), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.145-0500 c20012| 2016-04-06T02:53:36.427-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.151-0500 c20012| 2016-04-06T02:53:36.427-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.154-0500 c20012| 2016-04-06T02:53:36.427-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
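
The score(2.0003) line just above is the plan ranker's arithmetic for the config.chunks read: every candidate plan starts from baseScore 1, adds productivity = advanced/works (1/1 here, since the first work unit produced the single result), and adds three 0.0001 tie-breaker bonuses because the plan needs no separate fetch, no blocking sort stage (the { ns: 1, lastmod: 1 } index already delivers lastmod order), and no index intersection. Reproducing the number:

// Plan-ranker arithmetic from the log line above.
var baseScore = 1, advanced = 1, works = 1;
var tieBreakers = 0.0001 + 0.0001 + 0.0001;  // noFetch + noSort + noIxisect
print((baseScore + advanced / works + tieBreakers).toFixed(4));  // 2.0003
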
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.156-0500 c20012| 2016-04-06T02:53:36.427-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.157-0500 c20012| 2016-04-06T02:53:36.427-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.158-0500 c20012| 2016-04-06T02:53:36.427-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.163-0500 c20012| 2016-04-06T02:53:36.427-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24d'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216426), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.176-0500 c20012| 2016-04-06T02:53:36.427-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24d'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216426), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f24d'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216426), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.183-0500 c20012| 2016-04-06T02:53:36.427-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.188-0500 c20012| 2016-04-06T02:53:36.427-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.193-0500 c20012| 2016-04-06T02:53:36.427-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.195-0500 c20012| 2016-04-06T02:53:36.427-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.200-0500 c20012| 2016-04-06T02:53:36.427-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.201-0500 c20012| 2016-04-06T02:53:36.428-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.202-0500 c20012| 2016-04-06T02:53:36.428-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.204-0500 c20012| 2016-04-06T02:53:36.428-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.207-0500 c20012| 2016-04-06T02:53:36.428-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.210-0500 c20012| 2016-04-06T02:53:36.428-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.211-0500 c20012| 2016-04-06T02:53:36.428-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.213-0500 c20012| 2016-04-06T02:53:36.429-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.217-0500 c20012| 2016-04-06T02:53:36.430-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216430), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.218-0500 c20012| 2016-04-06T02:53:36.430-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.222-0500 c20012| 2016-04-06T02:53:36.430-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.225-0500 c20012| 2016-04-06T02:53:36.430-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.228-0500 c20012| 2016-04-06T02:53:36.430-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.229-0500 c20012| 2016-04-06T02:53:36.430-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.230-0500 c20012| 2016-04-06T02:53:36.430-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.234-0500 c20012| 2016-04-06T02:53:36.430-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216430), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.243-0500 c20012| 2016-04-06T02:53:36.430-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216430), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f24e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216430), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.245-0500 c20012| 2016-04-06T02:53:36.430-0500 D COMMAND [conn38] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.249-0500 c20012| 2016-04-06T02:53:36.430-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.264-0500 c20012| 2016-04-06T02:53:36.430-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.267-0500 c20012| 2016-04-06T02:53:36.430-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.272-0500 c20012| 2016-04-06T02:53:36.430-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.272-0500 c20012| 2016-04-06T02:53:36.430-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.276-0500 c20012| 2016-04-06T02:53:36.430-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.281-0500 c20012| 2016-04-06T02:53:36.430-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.283-0500 c20012| 2016-04-06T02:53:36.430-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.286-0500 c20012| 2016-04-06T02:53:36.431-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.286-0500 c20012| 2016-04-06T02:53:36.431-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.288-0500 c20012| 2016-04-06T02:53:36.431-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.293-0500 c20012| 2016-04-06T02:53:36.433-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24f'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216432), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.299-0500 c20012| 2016-04-06T02:53:36.433-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.303-0500 c20012| 2016-04-06T02:53:36.433-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.304-0500 c20012| 2016-04-06T02:53:36.433-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.309-0500 c20012| 2016-04-06T02:53:36.433-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.311-0500 c20012| 2016-04-06T02:53:36.433-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.311-0500 c20012| 2016-04-06T02:53:36.433-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.313-0500 c20012| 2016-04-06T02:53:36.433-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24f'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216432), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.330-0500 c20012| 2016-04-06T02:53:36.433-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f24f'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216432), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f24f'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216432), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.341-0500 c20012| 2016-04-06T02:53:36.433-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.344-0500 c20012| 2016-04-06T02:53:36.433-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.346-0500 c20012| 2016-04-06T02:53:36.433-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.347-0500 c20012| 2016-04-06T02:53:36.433-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.368-0500 c20012| 2016-04-06T02:53:36.433-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.369-0500 c20012| 2016-04-06T02:53:36.433-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.371-0500 c20012| 2016-04-06T02:53:36.433-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.373-0500 c20012| 2016-04-06T02:53:36.433-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.374-0500 c20012| 2016-04-06T02:53:36.434-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.377-0500 c20012| 2016-04-06T02:53:36.434-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.380-0500 c20012| 2016-04-06T02:53:36.434-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.381-0500 c20012| 2016-04-06T02:53:36.434-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.386-0500 c20012| 2016-04-06T02:53:36.435-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.388-0500 c20012| 2016-04-06T02:53:36.435-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.390-0500 c20012| 2016-04-06T02:53:36.435-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.395-0500 c20012| 2016-04-06T02:53:36.435-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:57.398-0500 c20012| 2016-04-06T02:53:36.435-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.402-0500 c20012| 2016-04-06T02:53:36.436-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f250'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216436), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.404-0500 c20012| 2016-04-06T02:53:36.436-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.405-0500 c20012| 2016-04-06T02:53:36.436-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.406-0500 c20012| 2016-04-06T02:53:36.436-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.408-0500 c20012| 2016-04-06T02:53:36.436-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.409-0500 c20012| 2016-04-06T02:53:36.436-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.410-0500 c20012| 2016-04-06T02:53:36.436-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.413-0500 c20012| 2016-04-06T02:53:36.436-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f250'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216436), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.417-0500 c20012| 2016-04-06T02:53:36.436-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f250'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216436), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f250'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216436), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.418-0500 c20012| 2016-04-06T02:53:36.436-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.422-0500 c20012| 2016-04-06T02:53:36.436-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.424-0500 c20012| 2016-04-06T02:53:36.436-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.425-0500 c20012| 2016-04-06T02:53:36.436-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.428-0500 c20012| 2016-04-06T02:53:36.437-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.430-0500 c20012| 2016-04-06T02:53:36.437-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.434-0500 c20012| 2016-04-06T02:53:36.437-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.436-0500 c20012| 2016-04-06T02:53:36.437-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.437-0500 c20012| 2016-04-06T02:53:36.437-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.440-0500 c20012| 2016-04-06T02:53:36.437-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.441-0500 c20012| 2016-04-06T02:53:36.438-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.444-0500 c20012| 2016-04-06T02:53:36.439-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.449-0500 c20012| 2016-04-06T02:53:36.440-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f251'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216440), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.451-0500 c20012| 2016-04-06T02:53:36.440-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.453-0500 c20012| 2016-04-06T02:53:36.440-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.454-0500 c20012| 2016-04-06T02:53:36.440-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.456-0500 c20012| 2016-04-06T02:53:36.440-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.456-0500 c20012| 2016-04-06T02:53:36.440-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.457-0500 c20012| 2016-04-06T02:53:36.440-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.487-0500 c20012| 2016-04-06T02:53:36.440-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f251'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216440), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.492-0500 c20012| 2016-04-06T02:53:36.440-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f251'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216440), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f251'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216440), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.495-0500 c20012| 2016-04-06T02:53:36.440-0500 D COMMAND [conn38] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.497-0500 c20012| 2016-04-06T02:53:36.440-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.499-0500 c20012| 2016-04-06T02:53:36.440-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.499-0500 c20012| 2016-04-06T02:53:36.440-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.505-0500 c20012| 2016-04-06T02:53:36.440-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.510-0500 c20012| 2016-04-06T02:53:36.441-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.512-0500 c20012| 2016-04-06T02:53:36.441-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.533-0500 c20012| 2016-04-06T02:53:36.441-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.534-0500 c20012| 2016-04-06T02:53:36.441-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.538-0500 c20012| 2016-04-06T02:53:36.441-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.539-0500 c20012| 2016-04-06T02:53:36.441-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.542-0500 c20012| 2016-04-06T02:53:36.441-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.547-0500 c20012| 2016-04-06T02:53:36.443-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f252'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216443), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.548-0500 c20012| 2016-04-06T02:53:36.443-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.549-0500 c20012| 2016-04-06T02:53:36.443-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.551-0500 c20012| 2016-04-06T02:53:36.443-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.551-0500 c20012| 2016-04-06T02:53:36.443-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.552-0500 c20012| 2016-04-06T02:53:36.443-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.554-0500 c20012| 2016-04-06T02:53:36.443-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.560-0500 c20012| 2016-04-06T02:53:36.443-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f252'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216443), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.563-0500 c20012| 2016-04-06T02:53:36.443-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f252'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216443), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f252'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216443), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.589-0500 c20012| 2016-04-06T02:53:36.444-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.596-0500 c20012| 2016-04-06T02:53:36.444-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.608-0500 c20012| 2016-04-06T02:53:36.444-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.612-0500 c20012| 2016-04-06T02:53:36.444-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.628-0500 c20012| 2016-04-06T02:53:36.444-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.649-0500 c20012| 2016-04-06T02:53:36.444-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.651-0500 c20012| 2016-04-06T02:53:36.444-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.653-0500 c20012| 2016-04-06T02:53:36.444-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.658-0500 c20012| 2016-04-06T02:53:36.444-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.662-0500 c20012| 2016-04-06T02:53:36.444-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.664-0500 c20012| 2016-04-06T02:53:36.444-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.672-0500 c20012| 2016-04-06T02:53:36.445-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.674-0500 c20012| 2016-04-06T02:53:36.445-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.677-0500 c20012| 2016-04-06T02:53:36.445-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.684-0500 c20012| 2016-04-06T02:53:36.445-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.690-0500 c20012| 2016-04-06T02:53:36.445-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:57.695-0500 c20012| 2016-04-06T02:53:36.446-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.706-0500 c20012| 2016-04-06T02:53:36.446-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f253'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216446), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.710-0500 c20012| 2016-04-06T02:53:36.446-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.713-0500 c20012| 2016-04-06T02:53:36.446-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.718-0500 c20012| 2016-04-06T02:53:36.446-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
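
Interleaved with the lock spin, conn42 keeps issuing the same config.chunks read: filter on ns, sort { lastmod: -1 }, limit 1 — evidently fetching the chunk with the highest version for multidrop.coll, which the split path consults as the collection's current version. The { ns: 1, lastmod: 1 } index serves both the filter and the sort, so the plan never needs an in-memory sort (hence the noSortBonus in the score line). As a shell query:

// Highest-version chunk for the collection (newest lastmod first).
db.getSiblingDB("config").chunks
    .find({ ns: "multidrop.coll" })
    .sort({ lastmod: -1 })
    .limit(1);
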
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.718-0500 c20012| 2016-04-06T02:53:36.446-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.720-0500 c20012| 2016-04-06T02:53:36.446-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.723-0500 c20012| 2016-04-06T02:53:36.446-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:57.731-0500 c20012| 2016-04-06T02:53:36.446-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f253'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216446), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:57.745-0500 c20012| 2016-04-06T02:53:36.446-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f253'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216446), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f253'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216446), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.764-0500 c20012| 2016-04-06T02:53:36.447-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.768-0500 c20012| 2016-04-06T02:53:36.447-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.771-0500 c20012| 2016-04-06T02:53:36.447-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.775-0500 c20011| 2016-04-06T02:53:08.779-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 453 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.779-0500 c20011| 2016-04-06T02:53:08.784-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:57.782-0500 c20011| 2016-04-06T02:53:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 455 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:57.783-0500 c20011| 2016-04-06T02:53:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 455 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:57.783-0500 c20011| 2016-04-06T02:53:08.784-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 455 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.786-0500 c20011| 2016-04-06T02:53:08.785-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 452 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.797-0500 c20011| 2016-04-06T02:53:08.785-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|10, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.805-0500 c20011| 2016-04-06T02:53:08.785-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:57.816-0500 c20011| 2016-04-06T02:53:08.785-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 458 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.785-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|10, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.823-0500 c20011| 2016-04-06T02:53:08.786-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 458 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:57.828-0500 c20011| 2016-04-06T02:53:08.787-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 458 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929188000|11, t: 4, h: -766951703923615705, v: 2, op: "i", ns: 
"config.changelog", o: { _id: "mongovm16-2016-04-06T02:53:08.786-0500-5704c06465c17830b843f1cc", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188786), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -62.0 }, max: { _id: MaxKey } }, left: { min: { _id: -62.0 }, max: { _id: -61.0 }, lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -61.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.831-0500 c20011| 2016-04-06T02:53:08.787-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929188000|11 and ending at ts: Timestamp 1459929188000|11 [js_test:multi_coll_drop] 2016-04-06T02:53:57.834-0500 c20011| 2016-04-06T02:53:08.787-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:57.835-0500 c20011| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.838-0500 c20011| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.840-0500 c20011| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.843-0500 c20011| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.846-0500 c20011| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.847-0500 c20011| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.857-0500 c20011| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.860-0500 c20011| 2016-04-06T02:53:08.788-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.862-0500 c20012| 2016-04-06T02:53:36.447-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.867-0500 c20012| 2016-04-06T02:53:36.447-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.869-0500 c20012| 2016-04-06T02:53:36.447-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } 
}, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.885-0500 c20012| 2016-04-06T02:53:36.447-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.886-0500 c20012| 2016-04-06T02:53:36.447-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.888-0500 c20012| 2016-04-06T02:53:36.447-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.893-0500 c20012| 2016-04-06T02:53:36.447-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.894-0500 c20012| 2016-04-06T02:53:36.447-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.894-0500 c20012| 2016-04-06T02:53:36.448-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:57.897-0500 c20012| 2016-04-06T02:53:36.449-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.900-0500 c20012| 2016-04-06T02:53:36.449-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.903-0500 s20015| 2016-04-06T02:53:37.374-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:57.910-0500 c20013| 2016-04-06T02:52:41.906-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:57.916-0500 c20013| 2016-04-06T02:52:41.906-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1354 -- target:mongovm16:20011 db:admin cmd:{ 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:57.923-0500 c20013| 2016-04-06T02:52:41.906-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1354 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:57.925-0500 c20013| 2016-04-06T02:52:41.906-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1354 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:57.926-0500 c20011| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.927-0500 c20011| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.929-0500 c20011| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.930-0500 c20011| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.931-0500 c20011| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.932-0500 c20011| 2016-04-06T02:53:08.789-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:53:57.934-0500 c20011| 2016-04-06T02:53:08.789-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.938-0500 c20011| 2016-04-06T02:53:08.789-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 460 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.789-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|10, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:57.941-0500 c20011| 2016-04-06T02:53:08.790-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 460 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:57.942-0500 c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.943-0500 c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.944-0500 c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.944-0500 c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.945-0500 c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.946-0500 
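The run of EXECUTOR lines around this point is the ordinary secondary apply path on c20011: the background-sync fetcher hands a one-entry batch (the config.changelog "split" document from the getMore response above) to the repl writer worker pool, the pool spins up, applies it, and its idle threads shut back down, after which the node reports its new durable/applied optimes upstream via replSetUpdatePosition. A minimal shell sketch for reading back the entry that was applied; the query shape is illustrative, but the what/ns/details fields match the logged document:

    // Read back the most recent "split" changelog entry for the test
    // collection (assumes the entry exists, as it does in this run).
    var entry = db.getSiblingDB("config").changelog
                  .find({ what: "split", ns: "multidrop.coll" })
                  .sort({ time: -1 }).limit(1).next();
    printjson(entry.details);   // before/left/right chunk bounds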
c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.947-0500 c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.949-0500 c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.954-0500 c20011| 2016-04-06T02:53:08.790-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.960-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.961-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.970-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.973-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.976-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.977-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.978-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.979-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.980-0500 c20011| 2016-04-06T02:53:08.791-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:53:57.980-0500 c20011| 2016-04-06T02:53:08.792-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:57.997-0500 c20011| 2016-04-06T02:53:08.793-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:58.003-0500 c20011| 2016-04-06T02:53:08.793-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 461 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|10, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:58.004-0500 c20011| 2016-04-06T02:53:08.793-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 461 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.016-0500 c20011| 2016-04-06T02:53:08.793-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 461 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.035-0500 c20011| 2016-04-06T02:53:08.796-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:58.045-0500 c20011| 2016-04-06T02:53:08.796-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 463 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:58.054-0500 c20011| 2016-04-06T02:53:08.796-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 463 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.066-0500 c20011| 2016-04-06T02:53:08.797-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 463 finished with response: { ok: 
1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.070-0500 c20011| 2016-04-06T02:53:08.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 460 finished with response: { cursor: { nextBatch: [], id: 23953707769, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.071-0500 c20011| 2016-04-06T02:53:08.799-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|11, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.079-0500 c20011| 2016-04-06T02:53:08.799-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:58.094-0500 c20011| 2016-04-06T02:53:08.799-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 466 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.799-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.105-0500 c20011| 2016-04-06T02:53:08.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 466 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.118-0500 c20011| 2016-04-06T02:53:09.171-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 467 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:19.171-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.126-0500 c20011| 2016-04-06T02:53:09.171-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 467 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.129-0500 c20011| 2016-04-06T02:53:10.248-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 468 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:20.248-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.132-0500 c20011| 2016-04-06T02:53:10.250-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 468 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:58.134-0500 c20011| 2016-04-06T02:53:10.250-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 468 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, opTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.136-0500 c20011| 2016-04-06T02:53:10.250-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:12.250Z [js_test:multi_coll_drop] 2016-04-06T02:53:58.138-0500 c20011| 2016-04-06T02:53:10.697-0500 D COMMAND [conn53] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.139-0500 c20011| 2016-04-06T02:53:10.697-0500 D COMMAND [conn53] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:58.143-0500 c20011| 2016-04-06T02:53:10.697-0500 I COMMAND [conn53] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
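Request 466, started above, is the next await on the tailed oplog: a getMore on local.oplog.rs with maxTimeMS 2500, so the fetcher blocks up to 2.5 seconds for new entries. As the lines below show, it times out with ExceededTimeLimit, c20011's fetcher abandons mongovm16:20013 as a sync source ("could not find member to sync from"), and, having seen no primary for 5000ms, the node begins a dry-run election. A rough shell equivalent of the tailing read, assuming the legacy-shell DBQuery options (illustrative only, not the server's internal code path):

    // Tail the oplog from the last applied optime; awaitData makes the
    // server-side getMore wait (bounded by maxTimeMS) for new entries.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var cur = oplog.find({ ts: { $gte: Timestamp(1459929188, 11) } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData)
                   .maxTimeMS(2500);
    while (cur.hasNext()) { printjson(cur.next()); }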
2016-04-06T02:53:58.149-0500 c20011| 2016-04-06T02:53:11.297-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20013: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:58.153-0500 c20011| 2016-04-06T02:53:11.297-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 470 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:53:58.154-0500 c20011| 2016-04-06T02:53:11.297-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 470 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.158-0500 c20011| 2016-04-06T02:53:12.250-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 471 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:22.250-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.163-0500 c20011| 2016-04-06T02:53:12.250-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 471 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:58.168-0500 c20011| 2016-04-06T02:53:12.251-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 471 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, opTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.169-0500 c20011| 2016-04-06T02:53:12.251-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:14.251Z [js_test:multi_coll_drop] 2016-04-06T02:53:58.170-0500 c20011| 2016-04-06T02:53:12.698-0500 D COMMAND [conn53] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.173-0500 c20011| 2016-04-06T02:53:12.698-0500 D COMMAND [conn53] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:58.176-0500 c20011| 2016-04-06T02:53:12.698-0500 I COMMAND [conn53] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } numYields:0 reslen:489 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.180-0500 c20011| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 466 timed out, adjusted timeout after getting connection from 
pool was 5000ms, op was id: 18, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 2016-04-06T02:53:08.799-0500, request: RemoteCommand 466 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.799-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.183-0500 c20011| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Operation timing out; original request was: RemoteCommand 466 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.799-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.208-0500 c20011| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to execute command: RemoteCommand 466 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.799-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 466 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.799-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.218-0500 c20011| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 466 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 466 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.799-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.223-0500 c20011| 2016-04-06T02:53:13.799-0500 D REPL [rsBackgroundSync-0] Error returned from oplog query: ExceededTimeLimit: Operation timed out, request was RemoteCommand 466 -- target:mongovm16:20013 db:local expDate:2016-04-06T02:53:13.799-0500 cmd:{ getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.224-0500 c20011| 2016-04-06T02:53:13.799-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.225-0500 c20011| 2016-04-06T02:53:13.799-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:53:58.227-0500 c20011| 2016-04-06T02:53:13.799-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 467 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:19.171-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.237-0500 c20011| 2016-04-06T02:53:13.799-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:13.799Z [js_test:multi_coll_drop] 2016-04-06T02:53:58.254-0500 c20011| 2016-04-06T02:53:13.799-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:13.799Z [js_test:multi_coll_drop] 2016-04-06T02:53:58.257-0500 c20011| 2016-04-06T02:53:13.799-0500 D ASIO 
[NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 467 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:19.171-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:58.257-0500 c20011| 2016-04-06T02:53:13.799-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 467 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:53:58.261-0500 c20011| 2016-04-06T02:53:13.800-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 475 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:23.800-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.262-0500 c20011| 2016-04-06T02:53:13.800-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 475 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:58.264-0500 c20011| 2016-04-06T02:53:13.800-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 476 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:19.171-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.264-0500 c20011| 2016-04-06T02:53:13.800-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.268-0500 c20011| 2016-04-06T02:53:13.800-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 475 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, opTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.270-0500 c20011| 2016-04-06T02:53:13.800-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 477 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.272-0500 c20011| 2016-04-06T02:53:13.800-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:16.300Z [js_test:multi_coll_drop] 2016-04-06T02:53:58.277-0500 c20011| 2016-04-06T02:53:13.802-0500 D COMMAND [conn53] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.283-0500 c20011| 2016-04-06T02:53:13.802-0500 D COMMAND [conn53] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:58.295-0500 c20012| 2016-04-06T02:53:36.449-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.302-0500 s20015| 2016-04-06T02:53:37.374-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 130 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.313-0500 c20013| 2016-04-06T02:52:41.907-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1351 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.316-0500 c20013| 2016-04-06T02:52:41.908-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|13, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.316-0500 c20013| 2016-04-06T02:52:41.908-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:53:58.323-0500 c20012| 2016-04-06T02:53:36.449-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:58.324-0500 s20015| 2016-04-06T02:53:37.376-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.337-0500 c20012| 2016-04-06T02:53:36.449-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.339-0500 c20011| 2016-04-06T02:53:13.804-0500 I COMMAND [conn53] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } numYields:0 reslen:458 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.342-0500 c20011| 2016-04-06T02:53:14.153-0500 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 5000ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.343-0500 c20011| 2016-04-06T02:53:14.153-0500 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected [js_test:multi_coll_drop] 2016-04-06T02:53:58.357-0500 c20011| 2016-04-06T02:53:14.153-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 479 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:19.153-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 4, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.362-0500 c20011| 2016-04-06T02:53:14.153-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 480 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:19.153-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 4, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929188000|11, t: 4 
} } [js_test:multi_coll_drop] 2016-04-06T02:53:58.363-0500 c20011| 2016-04-06T02:53:14.159-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.365-0500 s20015| 2016-04-06T02:53:37.376-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 130 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:58.367-0500 s20015| 2016-04-06T02:53:37.376-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 129 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.368-0500 s20015| 2016-04-06T02:53:37.376-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 129 finished with response: { cacheGeneration: ObjectId('5704c071525046a6a806333a'), ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.371-0500 s20015| 2016-04-06T02:53:37.376-0500 I ACCESS [UserCacheInvalidator] User cache generation changed from 5704c01f525046a6a8063338 to 5704c071525046a6a806333a; invalidating user cache [js_test:multi_coll_drop] 2016-04-06T02:53:58.373-0500 s20015| 2016-04-06T02:53:39.115-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:58.376-0500 s20015| 2016-04-06T02:53:39.115-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:53:58.377-0500 s20015| 2016-04-06T02:53:39.115-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20013, event detected [js_test:multi_coll_drop] 2016-04-06T02:53:58.378-0500 s20015| 2016-04-06T02:53:39.115-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 10 secs, remote host 192.168.100.28:20013) [js_test:multi_coll_drop] 2016-04-06T02:53:58.379-0500 s20015| 2016-04-06T02:53:39.115-0500 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.379-0500 s20015| 2016-04-06T02:53:39.116-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:58.379-0500 s20015| 2016-04-06T02:53:39.116-0500 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:58.379-0500 s20015| 2016-04-06T02:53:39.121-0500 D NETWORK [ReplicaSetMonitorWatcher] connected connection! 
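Interleaved with the config-server election traffic, the mongos s20015's ReplicaSetMonitorWatcher wakes, finds its idle sockets to 20013 and 20011 closed remotely, and reconnects before refreshing multidrop-configRS. A refresh is essentially an isMaster round-trip per host; a minimal sketch using a host name taken from the log:

    // Probe one config server the way the monitor does: run isMaster
    // and read the primary it reports for multidrop-configRS.
    var conn = new Mongo("mongovm16:20011");
    printjson(conn.adminCommand({ isMaster: 1 }).primary);

With the topology refreshed, the mongos re-reads config.databases, config.collections, and config.chunks with majority readConcern, which produces the full chunk listing below.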
[js_test:multi_coll_drop] 2016-04-06T02:53:58.382-0500 s20015| 2016-04-06T02:53:39.122-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:53:58.384-0500 s20015| 2016-04-06T02:53:39.122-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20011, event detected [js_test:multi_coll_drop] 2016-04-06T02:53:58.389-0500 s20015| 2016-04-06T02:53:39.122-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 11 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:53:58.390-0500 s20015| 2016-04-06T02:53:39.123-0500 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:58.391-0500 s20015| 2016-04-06T02:53:39.126-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:53:58.392-0500 s20015| 2016-04-06T02:53:39.129-0500 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongovm16:20011 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:53:58.396-0500 s20015| 2016-04-06T02:53:39.131-0500 D NETWORK [ReplicaSetMonitorWatcher] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:53:58.401-0500 s20015| 2016-04-06T02:53:39.746-0500 D ASIO [conn1] startCommand: RemoteCommand 132 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:09.746-0500 cmd:{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.403-0500 s20015| 2016-04-06T02:53:39.746-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 132 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.422-0500 s20015| 2016-04-06T02:53:39.751-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 132 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop", primary: "shard0000", partitioned: true } ], id: 0, ns: "config.databases" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.429-0500 s20015| 2016-04-06T02:53:39.752-0500 D ASIO [conn1] startCommand: RemoteCommand 134 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:09.752-0500 cmd:{ find: "databases", filter: { _id: "multidrop" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.430-0500 s20015| 2016-04-06T02:53:39.752-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 134 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.432-0500 s20015| 2016-04-06T02:53:39.753-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 134 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop", primary: "shard0000", partitioned: true } ], id: 0, ns: "config.databases" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.434-0500 s20015| 2016-04-06T02:53:39.753-0500 D ASIO [conn1] startCommand: RemoteCommand 136 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:09.753-0500 cmd:{ find: "collections", filter: { _id: /^multidrop\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|3, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.438-0500 s20015| 2016-04-06T02:53:39.754-0500 D 
ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 136 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.440-0500 s20015| 2016-04-06T02:53:39.755-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 136 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1.0 }, unique: false } ], id: 0, ns: "config.collections" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.441-0500 s20015| 2016-04-06T02:53:39.755-0500 D SHARDING [conn1] major version query from 0|0||5704c02806c33406d4d9c0c0 and over 0 shards is query: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.442-0500 s20015| 2016-04-06T02:53:39.755-0500 D ASIO [conn1] startCommand: RemoteCommand 138 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:09.755-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|3, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.443-0500 s20015| 2016-04-06T02:53:39.756-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 138 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:58.467-0500 s20015| 2016-04-06T02:53:39.758-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 138 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: MinKey }, max: { _id: -100.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-100.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -100.0 }, max: { _id: -99.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-99.0", lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -99.0 }, max: { _id: -98.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-98.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -98.0 }, max: { _id: -97.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-97.0", lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -97.0 }, max: { _id: -96.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-96.0", lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -96.0 }, max: { _id: -95.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-95.0", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -95.0 }, max: { _id: -94.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-94.0", lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -94.0 }, max: { _id: -93.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-93.0", lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -93.0 }, max: { _id: -92.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-92.0", lastmod: Timestamp 1000|19, lastmodEpoch: 
ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -92.0 }, max: { _id: -91.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-91.0", lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -91.0 }, max: { _id: -90.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-90.0", lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -90.0 }, max: { _id: -89.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-89.0", lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -89.0 }, max: { _id: -88.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-88.0", lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -88.0 }, max: { _id: -87.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-87.0", lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -87.0 }, max: { _id: -86.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-86.0", lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -86.0 }, max: { _id: -85.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-85.0", lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -85.0 }, max: { _id: -84.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-84.0", lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -84.0 }, max: { _id: -83.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-83.0", lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -83.0 }, max: { _id: -82.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-82.0", lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -82.0 }, max: { _id: -81.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-81.0", lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -81.0 }, max: { _id: -80.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-80.0", lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -80.0 }, max: { _id: -79.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-79.0", lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -79.0 }, max: { _id: -78.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-78.0", lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -78.0 }, max: { _id: -77.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-77.0", lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -77.0 }, max: { _id: -76.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-76.0", lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -76.0 }, max: { _id: -75.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-75.0", lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -75.0 }, max: { _id: -74.0 }, shard: "shard0000" }, { _id: 
"multidrop.coll-_id_-74.0", lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -74.0 }, max: { _id: -73.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: -72.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: -71.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: -70.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: -69.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: -68.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: -67.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: -66.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: -65.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: -64.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: -63.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: -62.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-62.0", lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -62.0 }, max: { _id: -61.0 }, shard: "shard0000" }, { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.473-0500 s20014| 2016-04-06T02:53:36.514-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 85.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 
2016-04-06T02:53:58.477-0500 c20012| 2016-04-06T02:53:36.449-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f254'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216449), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.478-0500 c20012| 2016-04-06T02:53:36.449-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.479-0500 c20012| 2016-04-06T02:53:36.449-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.481-0500 c20012| 2016-04-06T02:53:36.449-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.484-0500 c20012| 2016-04-06T02:53:36.449-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.485-0500 c20012| 2016-04-06T02:53:36.449-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.487-0500 c20012| 2016-04-06T02:53:36.449-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.490-0500 c20012| 2016-04-06T02:53:36.449-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f254'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216449), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.501-0500 c20012| 2016-04-06T02:53:36.449-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f254'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216449), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f254'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216449), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in 
multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.503-0500 c20012| 2016-04-06T02:53:36.450-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.507-0500 c20012| 2016-04-06T02:53:36.450-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.510-0500 c20012| 2016-04-06T02:53:36.450-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.512-0500 c20012| 2016-04-06T02:53:36.450-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:58.515-0500 c20012| 2016-04-06T02:53:36.450-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.519-0500 c20012| 2016-04-06T02:53:36.450-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.520-0500 c20012| 2016-04-06T02:53:36.450-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.521-0500 c20012| 2016-04-06T02:53:36.450-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.523-0500 c20012| 2016-04-06T02:53:36.450-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:58.537-0500 c20012| 2016-04-06T02:53:36.451-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.538-0500 c20012| 2016-04-06T02:53:36.451-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.539-0500 c20012| 2016-04-06T02:53:36.452-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.546-0500 c20012| 2016-04-06T02:53:36.453-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f255'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216453), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.557-0500 c20012| 2016-04-06T02:53:36.453-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.563-0500 c20012| 2016-04-06T02:53:36.453-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.571-0500 c20012| 2016-04-06T02:53:36.453-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.573-0500 c20012| 2016-04-06T02:53:36.454-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.575-0500 c20012| 2016-04-06T02:53:36.454-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.576-0500 c20012| 2016-04-06T02:53:36.454-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.588-0500 c20012| 2016-04-06T02:53:36.454-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f255'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216453), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.604-0500 c20012| 2016-04-06T02:53:36.454-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f255'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216453), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f255'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216453), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.608-0500 c20012| 2016-04-06T02:53:36.454-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.617-0500 c20012| 2016-04-06T02:53:36.454-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.623-0500 c20012| 2016-04-06T02:53:36.454-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.625-0500 c20012| 2016-04-06T02:53:36.454-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:58.634-0500 c20012| 2016-04-06T02:53:36.454-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.647-0500 c20012| 2016-04-06T02:53:36.454-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.666-0500 c20012| 2016-04-06T02:53:36.454-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.671-0500 c20012| 2016-04-06T02:53:36.454-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.672-0500 c20012| 2016-04-06T02:53:36.454-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:58.677-0500 c20012| 2016-04-06T02:53:36.454-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.678-0500 c20012| 2016-04-06T02:53:36.455-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.682-0500 c20012| 2016-04-06T02:53:36.456-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.684-0500 c20012| 2016-04-06T02:53:36.456-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.685-0500 c20012| 2016-04-06T02:53:36.456-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.689-0500 c20012| 2016-04-06T02:53:36.456-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.695-0500 c20012| 2016-04-06T02:53:36.456-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:58.698-0500 c20012| 2016-04-06T02:53:36.456-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.704-0500 c20012| 2016-04-06T02:53:36.457-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f256'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216456), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.707-0500 c20012| 2016-04-06T02:53:36.457-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.712-0500 c20012| 2016-04-06T02:53:36.457-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.714-0500 c20012| 2016-04-06T02:53:36.457-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.718-0500 c20012| 2016-04-06T02:53:36.457-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.719-0500 c20012| 2016-04-06T02:53:36.457-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.721-0500 c20012| 2016-04-06T02:53:36.457-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.760-0500 c20012| 2016-04-06T02:53:36.457-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f256'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216456), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.769-0500 c20012| 2016-04-06T02:53:36.457-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f256'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216456), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f256'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216456), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.771-0500 c20012| 2016-04-06T02:53:36.457-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.777-0500 c20012| 2016-04-06T02:53:36.457-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.778-0500 c20012| 2016-04-06T02:53:36.457-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.779-0500 c20012| 2016-04-06T02:53:36.457-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:58.781-0500 c20012| 2016-04-06T02:53:36.458-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.782-0500 c20012| 2016-04-06T02:53:36.458-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.783-0500 c20012| 2016-04-06T02:53:36.458-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.784-0500 c20012| 2016-04-06T02:53:36.458-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.797-0500 c20012| 2016-04-06T02:53:36.458-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:58.807-0500 c20012| 2016-04-06T02:53:36.458-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.809-0500 c20012| 2016-04-06T02:53:36.458-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.827-0500 c20012| 2016-04-06T02:53:36.460-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.834-0500 c20012| 2016-04-06T02:53:36.462-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.836-0500 c20012| 2016-04-06T02:53:36.462-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.838-0500 c20012| 2016-04-06T02:53:36.462-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.841-0500 c20012| 2016-04-06T02:53:36.462-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:58.846-0500 c20012| 2016-04-06T02:53:36.462-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.851-0500 c20012| 2016-04-06T02:53:36.463-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f257'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216463), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.857-0500 c20012| 2016-04-06T02:53:36.463-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.860-0500 c20012| 2016-04-06T02:53:36.463-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.863-0500 c20012| 2016-04-06T02:53:36.463-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.865-0500 c20012| 2016-04-06T02:53:36.463-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.869-0500 c20012| 2016-04-06T02:53:36.463-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.870-0500 c20012| 2016-04-06T02:53:36.463-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.873-0500 c20012| 2016-04-06T02:53:36.463-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f257'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216463), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.884-0500 c20012| 2016-04-06T02:53:36.463-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f257'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216463), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f257'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216463), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.891-0500 c20012| 2016-04-06T02:53:36.464-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.894-0500 c20012| 2016-04-06T02:53:36.464-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.896-0500 c20012| 2016-04-06T02:53:36.464-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.897-0500 c20012| 2016-04-06T02:53:36.464-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:58.900-0500 c20012| 2016-04-06T02:53:36.465-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.903-0500 c20012| 2016-04-06T02:53:36.465-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.909-0500 c20012| 2016-04-06T02:53:36.465-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.912-0500 c20012| 2016-04-06T02:53:36.465-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.913-0500 c20012| 2016-04-06T02:53:36.465-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:58.918-0500 c20012| 2016-04-06T02:53:36.466-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.919-0500 c20012| 2016-04-06T02:53:36.466-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.922-0500 c20012| 2016-04-06T02:53:36.467-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.928-0500 c20012| 2016-04-06T02:53:36.468-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.931-0500 c20012| 2016-04-06T02:53:36.468-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:58.935-0500 c20012| 2016-04-06T02:53:36.468-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.937-0500 c20012| 2016-04-06T02:53:36.469-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:58.942-0500 c20012| 2016-04-06T02:53:36.469-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.945-0500 c20012| 2016-04-06T02:53:36.469-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f258'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216469), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.946-0500 c20012| 2016-04-06T02:53:36.469-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.949-0500 c20012| 2016-04-06T02:53:36.469-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.952-0500 c20012| 2016-04-06T02:53:36.469-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.955-0500 c20012| 2016-04-06T02:53:36.470-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.958-0500 c20012| 2016-04-06T02:53:36.470-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.959-0500 c20012| 2016-04-06T02:53:36.470-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:58.963-0500 c20012| 2016-04-06T02:53:36.470-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f258'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216469), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:58.984-0500 c20012| 2016-04-06T02:53:36.470-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f258'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216469), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f258'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216469), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:58.995-0500 c20012| 2016-04-06T02:53:36.474-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:58.998-0500 c20012| 2016-04-06T02:53:36.475-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.000-0500 c20012| 2016-04-06T02:53:36.475-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.000-0500 c20012| 2016-04-06T02:53:36.475-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:59.000-0500 c20012| 2016-04-06T02:53:36.476-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.001-0500 c20012| 2016-04-06T02:53:36.476-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.021-0500 c20012| 2016-04-06T02:53:36.476-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.022-0500 c20012| 2016-04-06T02:53:36.476-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.022-0500 c20012| 2016-04-06T02:53:36.476-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:59.030-0500 c20012| 2016-04-06T02:53:36.477-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.033-0500 c20012| 2016-04-06T02:53:36.478-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.034-0500 c20012| 2016-04-06T02:53:36.479-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.038-0500 c20012| 2016-04-06T02:53:36.480-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.041-0500 c20012| 2016-04-06T02:53:36.480-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.047-0500 c20012| 2016-04-06T02:53:36.480-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.049-0500 c20012| 2016-04-06T02:53:36.480-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:59.054-0500 c20012| 2016-04-06T02:53:36.481-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.057-0500 c20012| 2016-04-06T02:53:36.481-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f259'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216481), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.059-0500 c20012| 2016-04-06T02:53:36.481-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.061-0500 c20012| 2016-04-06T02:53:36.481-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.062-0500 c20012| 2016-04-06T02:53:36.481-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.063-0500 c20012| 2016-04-06T02:53:36.481-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.064-0500 c20012| 2016-04-06T02:53:36.481-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:59.065-0500 c20012| 2016-04-06T02:53:36.481-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:59.067-0500 c20012| 2016-04-06T02:53:36.481-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f259'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216481), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.075-0500 c20012| 2016-04-06T02:53:36.481-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f259'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216481), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f259'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216481), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.076-0500 c20012| 2016-04-06T02:53:36.482-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.077-0500 c20012| 2016-04-06T02:53:36.482-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.080-0500 c20012| 2016-04-06T02:53:36.482-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.082-0500 c20012| 2016-04-06T02:53:36.482-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:59.083-0500 c20012| 2016-04-06T02:53:36.482-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.086-0500 c20012| 2016-04-06T02:53:36.482-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.086-0500 c20012| 2016-04-06T02:53:36.482-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.088-0500 c20012| 2016-04-06T02:53:36.482-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.089-0500 c20012| 2016-04-06T02:53:36.482-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:59.094-0500 c20012| 2016-04-06T02:53:36.483-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.095-0500 c20012| 2016-04-06T02:53:36.483-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.110-0500 c20012| 2016-04-06T02:53:36.484-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.122-0500 c20012| 2016-04-06T02:53:36.485-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.131-0500 c20012| 2016-04-06T02:53:36.485-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.134-0500 c20012| 2016-04-06T02:53:36.485-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.137-0500 c20012| 2016-04-06T02:53:36.485-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:59.139-0500 c20012| 2016-04-06T02:53:36.485-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.146-0500 c20012| 2016-04-06T02:53:36.486-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216486), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.150-0500 c20012| 2016-04-06T02:53:36.486-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.166-0500 c20012| 2016-04-06T02:53:36.487-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.168-0500 c20012| 2016-04-06T02:53:36.487-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.174-0500 c20012| 2016-04-06T02:53:36.487-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.175-0500 c20012| 2016-04-06T02:53:36.487-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:59.176-0500 c20012| 2016-04-06T02:53:36.487-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:59.182-0500 c20012| 2016-04-06T02:53:36.487-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216486), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.188-0500 c20012| 2016-04-06T02:53:36.487-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216486), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f25a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216486), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.189-0500 c20012| 2016-04-06T02:53:36.489-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.191-0500 c20012| 2016-04-06T02:53:36.489-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.193-0500 c20012| 2016-04-06T02:53:36.489-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.195-0500 c20012| 2016-04-06T02:53:36.489-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:59.198-0500 c20012| 2016-04-06T02:53:36.490-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.199-0500 c20012| 2016-04-06T02:53:36.491-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.201-0500 c20012| 2016-04-06T02:53:36.491-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.203-0500 c20012| 2016-04-06T02:53:36.491-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.204-0500 c20012| 2016-04-06T02:53:36.491-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:59.207-0500 c20012| 2016-04-06T02:53:36.492-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.208-0500 c20012| 2016-04-06T02:53:36.492-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.209-0500 c20012| 2016-04-06T02:53:36.493-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.212-0500 c20012| 2016-04-06T02:53:36.494-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.213-0500 c20012| 2016-04-06T02:53:36.494-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.216-0500 c20012| 2016-04-06T02:53:36.494-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.217-0500 c20012| 2016-04-06T02:53:36.494-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:59.228-0500 c20012| 2016-04-06T02:53:36.494-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.234-0500 c20012| 2016-04-06T02:53:36.495-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216495), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.240-0500 c20012| 2016-04-06T02:53:36.495-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.257-0500 c20012| 2016-04-06T02:53:36.496-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.264-0500 c20012| 2016-04-06T02:53:36.496-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.272-0500 c20012| 2016-04-06T02:53:36.496-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.275-0500 c20012| 2016-04-06T02:53:36.496-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:53:59.275-0500 c20012| 2016-04-06T02:53:36.496-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:53:59.280-0500 c20012| 2016-04-06T02:53:36.496-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216495), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.286-0500 c20012| 2016-04-06T02:53:36.496-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216495), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f25b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216495), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.289-0500 c20012| 2016-04-06T02:53:36.496-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.291-0500 c20012| 2016-04-06T02:53:36.496-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.295-0500 c20012| 2016-04-06T02:53:36.496-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.296-0500 c20012| 2016-04-06T02:53:36.496-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:59.299-0500 c20012| 2016-04-06T02:53:36.497-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.303-0500 c20012| 2016-04-06T02:53:36.499-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.305-0500 c20012| 2016-04-06T02:53:36.499-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.309-0500 c20012| 2016-04-06T02:53:36.499-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.312-0500 c20012| 2016-04-06T02:53:36.499-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:53:59.314-0500 c20012| 2016-04-06T02:53:36.499-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.316-0500 c20012| 2016-04-06T02:53:36.500-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.318-0500 c20012| 2016-04-06T02:53:36.501-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.324-0500 c20012| 2016-04-06T02:53:36.502-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.329-0500 s20014| 2016-04-06T02:53:36.514-0500 D ASIO [conn1] 
startCommand: RemoteCommand 738 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.514-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.329-0500 s20014| 2016-04-06T02:53:36.515-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 738 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:59.582-0500 s20014| 2016-04-06T02:53:36.515-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 738 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.583-0500 s20014| 2016-04-06T02:53:36.515-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:59.588-0500 s20014| 2016-04-06T02:53:36.518-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 86.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:59.590-0500 s20014| 2016-04-06T02:53:36.518-0500 D ASIO [conn1] startCommand: RemoteCommand 740 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.518-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.591-0500 s20014| 2016-04-06T02:53:36.518-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 740 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.593-0500 s20014| 2016-04-06T02:53:36.519-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 740 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.595-0500 s20014| 2016-04-06T02:53:36.519-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:59.602-0500 s20014| 2016-04-06T02:53:36.521-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 87.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: 
ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:59.605-0500 s20014| 2016-04-06T02:53:36.521-0500 D ASIO [conn1] startCommand: RemoteCommand 742 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.521-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.606-0500 s20014| 2016-04-06T02:53:36.521-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 742 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:59.610-0500 s20014| 2016-04-06T02:53:36.522-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 742 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.611-0500 s20014| 2016-04-06T02:53:36.522-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:59.616-0500 s20014| 2016-04-06T02:53:36.530-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 88.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:59.619-0500 s20014| 2016-04-06T02:53:36.530-0500 D ASIO [conn1] startCommand: RemoteCommand 744 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.530-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.621-0500 s20014| 2016-04-06T02:53:36.530-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 744 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:59.624-0500 s20014| 2016-04-06T02:53:36.532-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 744 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.625-0500 s20014| 2016-04-06T02:53:36.532-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:59.627-0500 s20014| 2016-04-06T02:53:36.539-0500 W 
SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 89.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:59.632-0500 s20014| 2016-04-06T02:53:36.539-0500 D ASIO [conn1] startCommand: RemoteCommand 746 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.539-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.636-0500 s20014| 2016-04-06T02:53:36.539-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 746 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.639-0500 s20014| 2016-04-06T02:53:36.539-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 746 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.649-0500 s20014| 2016-04-06T02:53:36.539-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:59.654-0500 s20014| 2016-04-06T02:53:36.556-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 90.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:59.662-0500 s20014| 2016-04-06T02:53:36.556-0500 D ASIO [conn1] startCommand: RemoteCommand 748 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.556-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.673-0500 s20014| 2016-04-06T02:53:36.556-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 748 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:59.677-0500 s20014| 2016-04-06T02:53:36.557-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 748 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], 
id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.680-0500 s20014| 2016-04-06T02:53:36.557-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:53:59.685-0500 s20014| 2016-04-06T02:53:36.575-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 91.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:53:59.687-0500 s20014| 2016-04-06T02:53:36.575-0500 D ASIO [conn1] startCommand: RemoteCommand 750 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.575-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.688-0500 s20014| 2016-04-06T02:53:36.575-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 750 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:53:59.689-0500 s20015| 2016-04-06T02:53:39.759-0500 D SHARDING [conn1] loaded 41 chunks into new chunk manager for multidrop.coll with version 1|80||5704c02806c33406d4d9c0c0 [js_test:multi_coll_drop] 2016-04-06T02:53:59.691-0500 s20015| 2016-04-06T02:53:39.759-0500 I SHARDING [conn1] ChunkManager: time to load chunks for multidrop.coll: 3ms sequenceNumber: 2 version: 1|80||5704c02806c33406d4d9c0c0 based on: (empty) [js_test:multi_coll_drop] 2016-04-06T02:53:59.693-0500 s20015| 2016-04-06T02:53:39.759-0500 D SHARDING [conn1] found 1 collections left and 0 collections dropped for database multidrop [js_test:multi_coll_drop] 2016-04-06T02:53:59.697-0500 s20015| 2016-04-06T02:53:39.759-0500 D ASIO [conn1] startCommand: RemoteCommand 140 -- target:mongovm16:20010 db:multidrop cmd:{ find: "coll", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ] } [js_test:multi_coll_drop] 2016-04-06T02:53:59.698-0500 s20015| 2016-04-06T02:53:39.759-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Connecting to mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:53:59.701-0500 s20015| 2016-04-06T02:53:39.761-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Starting asynchronous command 141 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:53:59.706-0500 s20015| 2016-04-06T02:53:39.761-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Starting asynchronous command 141 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:53:59.707-0500 s20015| 2016-04-06T02:53:39.761-0500 I ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Successfully connected to mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:53:59.709-0500 s20015| 2016-04-06T02:53:39.761-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Request 141 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:59.709-0500 s20015| 2016-04-06T02:53:39.761-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Starting 
asynchronous command 140 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:53:59.712-0500 s20015| 2016-04-06T02:53:39.761-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-0-0] Request 140 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "multidrop.coll" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.715-0500 s20015| 2016-04-06T02:53:40.070-0500 D ASIO [Balancer] startCommand: RemoteCommand 144 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:10.070-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929220070), up: 93, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.718-0500 s20015| 2016-04-06T02:53:40.070-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.719-0500 s20015| 2016-04-06T02:53:40.070-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 145 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.720-0500 s20015| 2016-04-06T02:53:40.071-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.722-0500 c20011| 2016-04-06T02:53:14.159-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 479 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.723-0500 c20011| 2016-04-06T02:53:14.159-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 481 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.725-0500 c20011| 2016-04-06T02:53:14.160-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 479 finished with response: { term: 4, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.726-0500 c20011| 2016-04-06T02:53:14.160-0500 I REPL [ReplicationExecutor] dry election run succeeded, running for election [js_test:multi_coll_drop] 2016-04-06T02:53:59.728-0500 c20011| 2016-04-06T02:53:14.160-0500 D QUERY [replExecDBWorker-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:59.734-0500 c20011| 2016-04-06T02:53:14.160-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 483 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:19.160-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 5, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.745-0500 c20011| 2016-04-06T02:53:14.160-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 484 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:19.160-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 5, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.747-0500 c20011| 2016-04-06T02:53:14.160-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.749-0500 c20011| 2016-04-06T02:53:14.160-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 483 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.750-0500 c20011| 2016-04-06T02:53:14.162-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 485 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.756-0500 c20011| 2016-04-06T02:53:14.162-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 483 finished with response: { term: 5, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.759-0500 c20011| 2016-04-06T02:53:14.162-0500 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 5 [js_test:multi_coll_drop] 2016-04-06T02:53:59.760-0500 c20011| 2016-04-06T02:53:14.163-0500 I REPL [ReplicationExecutor] transition to PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:53:59.761-0500 c20011| 2016-04-06T02:53:14.163-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:14.163Z [js_test:multi_coll_drop] 2016-04-06T02:53:59.764-0500 c20011| 2016-04-06T02:53:14.163-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:14.163Z [js_test:multi_coll_drop] 2016-04-06T02:53:59.768-0500 c20011| 2016-04-06T02:53:14.163-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 487 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:24.163-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.771-0500 c20011| 2016-04-06T02:53:14.163-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 488 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:19.171-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.772-0500 c20011| 2016-04-06T02:53:14.163-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 487 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.775-0500 c20011| 2016-04-06T02:53:14.163-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.775-0500 c20011| 2016-04-06T02:53:14.163-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 489 on host 
mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.779-0500 c20011| 2016-04-06T02:53:14.163-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 487 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 5, primaryId: 2, durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, opTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.782-0500 c20011| 2016-04-06T02:53:14.163-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:16.163Z [js_test:multi_coll_drop] 2016-04-06T02:53:59.783-0500 c20011| 2016-04-06T02:53:14.802-0500 D REPL [rsSync] Removing temporary collections from config [js_test:multi_coll_drop] 2016-04-06T02:53:59.784-0500 c20011| 2016-04-06T02:53:14.803-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted [js_test:multi_coll_drop] 2016-04-06T02:53:59.788-0500 c20011| 2016-04-06T02:53:15.675-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34278 #57 (7 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:59.790-0500 c20011| 2016-04-06T02:53:15.676-0500 D COMMAND [conn57] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.794-0500 c20011| 2016-04-06T02:53:15.676-0500 I COMMAND [conn57] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.800-0500 c20011| 2016-04-06T02:53:16.163-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 491 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:26.163-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.801-0500 c20011| 2016-04-06T02:53:16.166-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 491 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.809-0500 c20011| 2016-04-06T02:53:16.191-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 491 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 5, primaryId: 2, durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, opTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.809-0500 c20011| 2016-04-06T02:53:16.196-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:18.196Z [js_test:multi_coll_drop] 2016-04-06T02:53:59.812-0500 c20011| 2016-04-06T02:53:16.305-0500 D COMMAND [conn53] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.813-0500 c20011| 2016-04-06T02:53:16.305-0500 D COMMAND [conn53] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:59.824-0500 c20011| 2016-04-06T02:53:16.310-0500 I COMMAND [conn53] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } numYields:0 reslen:480 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.826-0500 c20011| 2016-04-06T02:53:16.822-0500 D COMMAND [conn53] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.831-0500 c20011| 2016-04-06T02:53:16.822-0500 D QUERY [conn53] Only one plan is 
available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:53:59.839-0500 c20011| 2016-04-06T02:53:16.822-0500 I COMMAND [conn53] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.839-0500 c20011| 2016-04-06T02:53:16.823-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34328 #58 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:59.841-0500 c20011| 2016-04-06T02:53:16.823-0500 D COMMAND [conn58] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.842-0500 c20011| 2016-04-06T02:53:16.826-0500 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 3ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.845-0500 c20011| 2016-04-06T02:53:16.828-0500 D COMMAND [conn58] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929188000|11 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.848-0500 c20011| 2016-04-06T02:53:16.828-0500 I COMMAND [conn58] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929188000|11 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 5 } planSummary: COLLSCAN cursorid:19461455963 keysExamined:0 docsExamined:2 numYields:0 nreturned:2 reslen:1003 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.851-0500 c20011| 2016-04-06T02:53:16.833-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.855-0500 c20011| 2016-04-06T02:53:16.973-0500 D COMMAND [conn57] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.856-0500 c20011| 2016-04-06T02:53:16.973-0500 I COMMAND [conn57] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.858-0500 c20011| 2016-04-06T02:53:18.196-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 493 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:28.196-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.861-0500 c20011| 2016-04-06T02:53:18.196-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 493 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:53:59.865-0500 c20011| 2016-04-06T02:53:18.210-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 493 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 
1459929194000|2, t: 5 }, opTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.865-0500 c20011| 2016-04-06T02:53:18.210-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929194000|2, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.869-0500 c20011| 2016-04-06T02:53:18.210-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:20.210Z [js_test:multi_coll_drop] 2016-04-06T02:53:59.874-0500 c20011| 2016-04-06T02:53:18.210-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|11, t: 4 } } cursorid:19461455963 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 1377ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.878-0500 c20011| 2016-04-06T02:53:18.213-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.883-0500 c20011| 2016-04-06T02:53:18.810-0500 D COMMAND [conn53] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.884-0500 c20011| 2016-04-06T02:53:18.810-0500 D COMMAND [conn53] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:59.890-0500 c20011| 2016-04-06T02:53:18.810-0500 I COMMAND [conn53] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.891-0500 c20011| 2016-04-06T02:53:18.967-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34459 #59 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:59.894-0500 c20011| 2016-04-06T02:53:18.968-0500 D COMMAND [conn51] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.895-0500 c20011| 2016-04-06T02:53:18.968-0500 D COMMAND [conn51] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:53:59.898-0500 c20011| 2016-04-06T02:53:18.968-0500 I COMMAND [conn51] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.902-0500 c20011| 2016-04-06T02:53:18.969-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Failed to execute command: RemoteCommand 470 -- target:mongovm16:20013 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } reason: HostUnreachable: End of file 
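
In the stretch above, c20011 wins the term-5 election (a dry run first, then replSetRequestVotes with dryRun: false), transitions to PRIMARY, and starts serving the secondary's oplog reads: the conn58 find on local.oplog.rs with tailable, awaitData, and oplogReplay, followed by getMore calls that piggyback lastKnownCommittedOpTime, is how a member pulls new entries while reporting how far it has replicated, and it is what lets the new primary log "Updating _lastCommittedOpTime" once a majority has caught up. A minimal mongo-shell sketch of that same read pattern, not the server's internal fetcher, using the legacy DBQuery.Option flags that mirror the options in the logged find:

    // Sketch only: tail the oplog the way the conn58 entries above do.
    db.getMongo().setSlaveOk();                         // allow reads on a secondary too
    var oplog = db.getSiblingDB("local").getCollection("oplog.rs");
    // Start from the newest entry currently in the oplog.
    var lastTs = oplog.find().sort({ $natural: -1 }).limit(1).next().ts;
    var cur = oplog.find({ ts: { $gte: lastTs } })
                   .addOption(DBQuery.Option.tailable)    // cursor stays open at end of data
                   .addOption(DBQuery.Option.awaitData)   // each getMore waits briefly for new entries
                   .addOption(DBQuery.Option.oplogReplay); // fast-seek on the ts predicate
    // Prints entries until the cursor returns no data or is killed.
    while (cur.hasNext()) {
        printjson(cur.next()); // one replicated op per doc: { ts, t, h, op, ns, o, ... }
    }

The replSetUpdatePosition failure to mongovm16:20013 ("HostUnreachable: End of file") is the expected churn from this suite's continuous config-server stepdowns; SyncSourceFeedback simply retries, as the next entries show.
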
[js_test:multi_coll_drop] 2016-04-06T02:53:59.903-0500 c20011| 2016-04-06T02:53:18.969-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 470 finished with response: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:59.905-0500 c20011| 2016-04-06T02:53:18.969-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34461 #60 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:59.907-0500 c20011| 2016-04-06T02:53:18.969-0500 D COMMAND [conn60] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.910-0500 c20011| 2016-04-06T02:53:18.969-0500 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.910-0500 c20011| 2016-04-06T02:53:18.969-0500 D COMMAND [conn60] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.915-0500 c20011| 2016-04-06T02:53:18.970-0500 I COMMAND [conn60] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.916-0500 c20011| 2016-04-06T02:53:18.970-0500 D COMMAND [conn60] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.920-0500 c20011| 2016-04-06T02:53:18.970-0500 I COMMAND [conn60] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.922-0500 c20011| 2016-04-06T02:53:18.970-0500 D COMMAND [conn55] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198271), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.922-0500 c20011| 2016-04-06T02:53:18.970-0500 D QUERY [conn55] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.928-0500 c20011| 2016-04-06T02:53:18.970-0500 I WRITE [conn55] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198271), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.932-0500 c20011| 2016-04-06T02:53:18.970-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } cursorid:19461455963 numYields:1 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 757ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.934-0500 c20011| 2016-04-06T02:53:18.971-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34463 #61 (11 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:59.936-0500 c20011| 2016-04-06T02:53:18.971-0500 D COMMAND [conn61] run command admin.$cmd { 
isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.938-0500 c20011| 2016-04-06T02:53:18.971-0500 I COMMAND [conn61] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.938-0500 c20011| 2016-04-06T02:53:18.971-0500 D COMMAND [conn61] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.943-0500 c20011| 2016-04-06T02:53:18.971-0500 I COMMAND [conn61] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.944-0500 c20011| 2016-04-06T02:53:18.971-0500 D COMMAND [conn61] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.946-0500 c20011| 2016-04-06T02:53:18.971-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20013: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:59.947-0500 c20011| 2016-04-06T02:53:18.971-0500 I COMMAND [conn61] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.951-0500 c20011| 2016-04-06T02:53:18.971-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: HostUnreachable: End of file [js_test:multi_coll_drop] 2016-04-06T02:53:59.953-0500 c20011| 2016-04-06T02:53:18.971-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|78 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.956-0500 c20011| 2016-04-06T02:53:18.971-0500 D COMMAND [conn54] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.958-0500 c20011| 2016-04-06T02:53:18.971-0500 D COMMAND [conn54] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|78 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.960-0500 c20011| 2016-04-06T02:53:18.972-0500 D QUERY [conn54] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:53:59.964-0500 c20011| 2016-04-06T02:53:18.972-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|78 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:53:59.967-0500 c20011| 2016-04-06T02:53:18.973-0500 D REPL [conn55] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.968-0500 c20011| 2016-04-06T02:53:18.973-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.970-0500 c20011| 2016-04-06T02:53:18.973-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 481 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:59.970-0500 c20011| 2016-04-06T02:53:18.973-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.972-0500 c20011| 2016-04-06T02:53:18.973-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 489 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:59.973-0500 c20011| 2016-04-06T02:53:18.973-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.976-0500 c20011| 2016-04-06T02:53:18.973-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 485 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:59.980-0500 c20011| 2016-04-06T02:53:18.973-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.982-0500 c20011| 2016-04-06T02:53:18.973-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 477 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:53:59.984-0500 c20011| 2016-04-06T02:53:18.973-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 488 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:53:59.989-0500 c20011| 2016-04-06T02:53:18.973-0500 D COMMAND [conn56] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20010:1459929128:185613966" }, update: { $set: { ping: new Date(1459929191721) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:53:59.990-0500 c20011| 2016-04-06T02:53:18.973-0500 D QUERY [conn56] Using idhack: { _id: 
"mongovm16:20010:1459929128:185613966" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.991-0500 c20011| 2016-04-06T02:53:18.974-0500 D REPL [conn56] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:53:59.993-0500 c20011| 2016-04-06T02:53:18.974-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:53:59.993-0500 c20011| 2016-04-06T02:53:18.974-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34465 #62 (12 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:53:59.996-0500 c20011| 2016-04-06T02:53:18.974-0500 D COMMAND [conn62] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.006-0500 c20011| 2016-04-06T02:53:18.974-0500 I COMMAND [conn62] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.007-0500 c20011| 2016-04-06T02:53:18.975-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } cursorid:19461455963 numYields:0 nreturned:1 reslen:524 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.012-0500 c20011| 2016-04-06T02:53:18.975-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06e65c17830b843f1cd'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929198974), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.013-0500 c20011| 2016-04-06T02:53:18.975-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.015-0500 c20011| 2016-04-06T02:53:18.975-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.018-0500 c20011| 2016-04-06T02:53:18.975-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.021-0500 c20011| 2016-04-06T02:53:18.975-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.022-0500 c20011| 2016-04-06T02:53:18.975-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.037-0500 c20013| 2016-04-06T02:52:41.908-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1357 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.908-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|13, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.038-0500 c20013| 2016-04-06T02:52:41.908-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1357 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.040-0500 c20013| 2016-04-06T02:52:41.908-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1357 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|14, t: 3, h: -7747457248041316998, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.042-0500 c20013| 2016-04-06T02:52:41.908-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|14 and ending at ts: Timestamp 1459929161000|14 [js_test:multi_coll_drop] 2016-04-06T02:54:00.042-0500 c20013| 2016-04-06T02:52:41.910-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:00.044-0500 s20015| 2016-04-06T02:53:40.071-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 145 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:00.046-0500 s20015| 2016-04-06T02:53:40.071-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 144 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:00.052-0500 s20015| 2016-04-06T02:53:40.080-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 144 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929220000|1, t: 7 }, electionId: ObjectId('7fffffff0000000000000007') } [js_test:multi_coll_drop] 2016-04-06T02:54:00.055-0500 s20015| 2016-04-06T02:53:40.080-0500 D ASIO [Balancer] startCommand: RemoteCommand 147 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:10.080-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|1, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.056-0500 s20015| 2016-04-06T02:53:40.080-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 147 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:00.060-0500 s20015| 2016-04-06T02:53:40.082-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 147 finished with response: { waitedMS: 1, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.061-0500 s20015| 2016-04-06T02:53:40.082-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929220000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.066-0500 s20015| 2016-04-06T02:53:40.082-0500 D ASIO [Balancer] startCommand: RemoteCommand 149 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:10.082-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.067-0500 s20015| 2016-04-06T02:53:40.082-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 149 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:00.070-0500 s20015| 2016-04-06T02:53:40.083-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 149 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.071-0500 s20015| 2016-04-06T02:53:40.083-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:54:00.075-0500 s20015| 2016-04-06T02:53:40.083-0500 D ASIO [Balancer] startCommand: RemoteCommand 151 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:10.083-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.077-0500 s20015| 2016-04-06T02:53:40.083-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 151 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:00.078-0500 s20015| 2016-04-06T02:53:40.083-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 151 
finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.079-0500 s20015| 2016-04-06T02:53:40.083-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:54:00.082-0500 s20015| 2016-04-06T02:53:40.083-0500 D ASIO [Balancer] startCommand: RemoteCommand 153 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:10.083-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929220083), up: 93, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.083-0500 s20015| 2016-04-06T02:53:40.083-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 153 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:00.086-0500 s20015| 2016-04-06T02:53:40.091-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 153 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929220000|2, t: 7 }, electionId: ObjectId('7fffffff0000000000000007') } [js_test:multi_coll_drop] 2016-04-06T02:54:00.089-0500 c20012| 2016-04-06T02:53:36.502-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.092-0500 s20014| 2016-04-06T02:53:36.576-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 750 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.094-0500 s20014| 2016-04-06T02:53:36.576-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:54:00.100-0500 s20014| 2016-04-06T02:53:36.584-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 92.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:00.103-0500 s20014| 2016-04-06T02:53:36.584-0500 D ASIO [conn1] startCommand: RemoteCommand 752 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.584-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.105-0500 s20014| 2016-04-06T02:53:36.584-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 752 on host mongovm16:20012 
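The repeated splitChunk failures in this stretch all trace back to one mechanism. Before splitting a chunk, the shard must take the config-server distributed lock for "multidrop.coll", which it attempts with the findAndModify on config.locks logged by conn62 (02:53:18.975) and conn38 (02:53:36.502). That query only matches a free lock (state: 0); because the lock is still held, nothing matches, the upsert path tries to insert a new document with the same _id, and the unique _id_ index rejects it with E11000. The duplicate-key error is how a busy lock shows up: the caller keeps retrying until its acquisition timeout, mongos reports "LockBusy: timed out waiting for multidrop.coll", re-reads config.chunks, and retries the split with the next key (92.0, 93.0, 94.0, ...).

A minimal mongo-shell sketch of that acquisition pattern, reduced from the commands in the log; the who/why values are placeholders rather than the exact strings a real lock attempt writes, and the ts/process/when fields are elided:

    // Try to take the distributed lock for "multidrop.coll" on the config server.
    // Only a free lock (state: 0) matches the query; if the lock is held, the
    // upsert collides with the existing _id and fails with duplicate-key 11000.
    var res = db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: { state: 2, who: "<host:port:epoch:conn>", why: "splitting chunk" } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    if (res.ok !== 1 && res.code === 11000) {
        // Busy lock: the same condition surfaced above as
        // "LockBusy: timed out waiting for multidrop.coll".
        print("config lock for multidrop.coll is held; caller will retry");
    }

The other recurring pattern here is the read side: every config-server read carries readConcern { level: "majority", afterOpTime: ... }, so a node first logs "Waiting for 'committed' snapshot" until its majority-committed snapshot catches up to that optime, then "Using 'committed' snapshot" and serves the query. Reconstructed from the logged find on config.locks, with the timestamp values taken from the log:

    // A config read as mongos issues it: majority readConcern pinned at an
    // optime, so the reply cannot reflect state older than that prior write.
    db.getSiblingDB("config").runCommand({
        find: "locks",
        filter: { _id: "multidrop.coll" },
        readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1459929216, 1), t: NumberLong(7) } },
        limit: 1,
        maxTimeMS: 30000
    });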
[js_test:multi_coll_drop] 2016-04-06T02:54:00.110-0500 s20014| 2016-04-06T02:53:36.585-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 752 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.111-0500 s20014| 2016-04-06T02:53:36.585-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:54:00.117-0500 s20014| 2016-04-06T02:53:36.595-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 93.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:00.120-0500 s20014| 2016-04-06T02:53:36.596-0500 D ASIO [conn1] startCommand: RemoteCommand 754 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.596-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.123-0500 s20014| 2016-04-06T02:53:36.596-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 754 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.130-0500 s20014| 2016-04-06T02:53:36.596-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 754 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.131-0500 s20014| 2016-04-06T02:53:36.596-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:54:00.137-0500 s20014| 2016-04-06T02:53:36.604-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 94.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:00.138-0500 c20011| 2016-04-06T02:53:18.975-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.143-0500 c20011| 
2016-04-06T02:53:18.975-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06e65c17830b843f1cd'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929198974), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.148-0500 c20011| 2016-04-06T02:53:18.975-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06e65c17830b843f1cd'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929198974), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c06e65c17830b843f1cd'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929198974), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.152-0500 c20011| 2016-04-06T02:53:18.975-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.155-0500 c20011| 2016-04-06T02:53:18.975-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.159-0500 c20011| 2016-04-06T02:53:18.975-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.161-0500 s20014| 2016-04-06T02:53:36.604-0500 D ASIO [conn1] startCommand: RemoteCommand 756 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.604-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.164-0500 s20014| 2016-04-06T02:53:36.604-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 756 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.166-0500 s20014| 2016-04-06T02:53:36.604-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 756 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.167-0500 s20014| 2016-04-06T02:53:36.605-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:54:00.170-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.172-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.177-0500 c20012| 2016-04-06T02:53:36.502-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.181-0500 c20012| 2016-04-06T02:53:36.502-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:00.184-0500 c20012| 2016-04-06T02:53:36.502-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.188-0500 c20012| 2016-04-06T02:53:36.502-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216502), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.195-0500 c20012| 2016-04-06T02:53:36.502-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.198-0500 c20012| 2016-04-06T02:53:36.502-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.202-0500 c20012| 2016-04-06T02:53:36.502-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.205-0500 c20012| 2016-04-06T02:53:36.503-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.206-0500 c20012| 2016-04-06T02:53:36.503-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.209-0500 c20012| 2016-04-06T02:53:36.503-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.212-0500 c20012| 2016-04-06T02:53:36.503-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216502), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.219-0500 c20012| 2016-04-06T02:53:36.503-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216502), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f25c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216502), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.221-0500 c20012| 2016-04-06T02:53:36.503-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.227-0500 c20012| 2016-04-06T02:53:36.503-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.230-0500 c20012| 2016-04-06T02:53:36.503-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.230-0500 c20012| 2016-04-06T02:53:36.503-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.233-0500 c20012| 2016-04-06T02:53:36.503-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.235-0500 c20012| 2016-04-06T02:53:36.504-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.238-0500 c20012| 2016-04-06T02:53:36.504-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.243-0500 c20012| 2016-04-06T02:53:36.504-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.250-0500 s20014| 2016-04-06T02:53:36.608-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 95.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:00.253-0500 s20014| 2016-04-06T02:53:36.608-0500 D ASIO [conn1] startCommand: RemoteCommand 758 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.608-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.253-0500 s20014| 2016-04-06T02:53:36.608-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 758 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.258-0500 s20014| 2016-04-06T02:53:36.609-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 758 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: 
"shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.259-0500 s20014| 2016-04-06T02:53:36.609-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:54:00.264-0500 s20014| 2016-04-06T02:53:36.618-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 96.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:00.267-0500 s20014| 2016-04-06T02:53:36.618-0500 D ASIO [conn1] startCommand: RemoteCommand 760 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.618-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.268-0500 s20014| 2016-04-06T02:53:36.618-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 760 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.275-0500 s20014| 2016-04-06T02:53:36.619-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 760 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.275-0500 s20014| 2016-04-06T02:53:36.619-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:54:00.281-0500 s20014| 2016-04-06T02:53:36.622-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 97.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:00.284-0500 s20014| 2016-04-06T02:53:36.622-0500 D ASIO [conn1] startCommand: RemoteCommand 762 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.622-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.285-0500 s20014| 2016-04-06T02:53:36.622-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 762 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.288-0500 
s20014| 2016-04-06T02:53:36.623-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 762 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.289-0500 s20014| 2016-04-06T02:53:36.623-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:54:00.293-0500 s20014| 2016-04-06T02:53:36.627-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 98.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:00.296-0500 s20014| 2016-04-06T02:53:36.627-0500 D ASIO [conn1] startCommand: RemoteCommand 764 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.627-0500 cmd:{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.297-0500 s20014| 2016-04-06T02:53:36.627-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 764 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.299-0500 s20014| 2016-04-06T02:53:36.628-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 764 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll-_id_-61.0", lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -61.0 }, max: { _id: MaxKey }, shard: "shard0000" } ], id: 0, ns: "config.chunks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.301-0500 s20014| 2016-04-06T02:53:36.628-0500 I COMMAND [conn1] splitting chunk [{ _id: -61.0 },{ _id: MaxKey }) in collection multidrop.coll on shard shard0000 [js_test:multi_coll_drop] 2016-04-06T02:54:00.305-0500 s20014| 2016-04-06T02:53:36.639-0500 W SHARDING [conn1] splitChunk cmd { splitChunk: "multidrop.coll", keyPattern: { _id: 1.0 }, min: { _id: -61.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 99.0 } ], configdb: "multidrop-configRS/mongovm16:20011,mongovm16:20012,mongovm16:20013", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ], epoch: ObjectId('5704c02806c33406d4d9c0c0') } failed :: caused by :: UnknownError: could not acquire collection lock for multidrop.coll to split chunk [{ _id: -61.0 },{ _id: MaxKey }) :: caused by :: LockBusy: timed out waiting for multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:00.305-0500 s20014| 2016-04-06T02:53:36.890-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:00.306-0500 s20014| 2016-04-06T02:53:36.890-0500 D NETWORK [Balancer] polling for status of connection to 
192.168.100.28:20011, no events [js_test:multi_coll_drop] 2016-04-06T02:54:00.310-0500 s20014| 2016-04-06T02:53:36.891-0500 D ASIO [Balancer] startCommand: RemoteCommand 766 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.891-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929211993), up: 84, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.312-0500 s20014| 2016-04-06T02:53:36.894-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 766 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:00.316-0500 s20014| 2016-04-06T02:53:36.908-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 766 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929216000|2, t: 7 }, electionId: ObjectId('7fffffff0000000000000007') } [js_test:multi_coll_drop] 2016-04-06T02:54:00.319-0500 s20014| 2016-04-06T02:53:36.908-0500 D ASIO [Balancer] startCommand: RemoteCommand 768 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.908-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|2, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.321-0500 s20014| 2016-04-06T02:53:36.908-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 768 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.324-0500 s20014| 2016-04-06T02:53:36.909-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 768 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.325-0500 s20014| 2016-04-06T02:53:36.909-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929216000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.329-0500 s20014| 2016-04-06T02:53:36.909-0500 D ASIO [Balancer] startCommand: RemoteCommand 770 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.909-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.331-0500 s20014| 2016-04-06T02:53:36.909-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 770 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:00.334-0500 s20014| 2016-04-06T02:53:36.910-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 770 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.334-0500 s20014| 2016-04-06T02:53:36.912-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:54:00.337-0500 s20014| 2016-04-06T02:53:36.912-0500 D ASIO [Balancer] startCommand: RemoteCommand 772 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:06.912-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 
2016-04-06T02:54:00.338-0500 s20014| 2016-04-06T02:53:36.912-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 772 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.341-0500 s20014| 2016-04-06T02:53:36.913-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 772 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.343-0500 s20014| 2016-04-06T02:53:36.914-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:54:00.348-0500 s20014| 2016-04-06T02:53:36.914-0500 D ASIO [Balancer] startCommand: RemoteCommand 774 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:06.914-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929216914), up: 89, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.350-0500 c20011| 2016-04-06T02:53:18.975-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.352-0500 c20011| 2016-04-06T02:53:18.976-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 488 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 0, durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, opTime: { ts: Timestamp 1459929198000|2, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.353-0500 c20011| 2016-04-06T02:53:18.976-0500 I REPL [ReplicationExecutor] Member mongovm16:20013 is now in state SECONDARY [js_test:multi_coll_drop] 2016-04-06T02:54:00.356-0500 c20011| 2016-04-06T02:53:18.976-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929194000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.364-0500 c20011| 2016-04-06T02:53:18.976-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.365-0500 c20011| 2016-04-06T02:53:18.976-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.366-0500 c20011| 2016-04-06T02:53:18.976-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:20.976Z [js_test:multi_coll_drop] 2016-04-06T02:54:00.368-0500 c20011| 2016-04-06T02:53:18.976-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|2, t: 5 } }, 
limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.368-0500 c20011| 2016-04-06T02:53:18.976-0500 D REPL [conn62] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929198000|2, t: 5 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929194000|2, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.372-0500 c20011| 2016-04-06T02:53:18.976-0500 D REPL [conn62] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999986μs [js_test:multi_coll_drop] 2016-04-06T02:54:00.374-0500 c20011| 2016-04-06T02:53:18.976-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34466 #63 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:00.376-0500 c20011| 2016-04-06T02:53:18.976-0500 D COMMAND [conn59] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.379-0500 c20011| 2016-04-06T02:53:18.976-0500 D REPL [conn56] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.382-0500 c20011| 2016-04-06T02:53:18.976-0500 D REPL [conn56] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.384-0500 c20011| 2016-04-06T02:53:18.977-0500 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.385-0500 c20011| 2016-04-06T02:53:18.977-0500 D COMMAND [conn63] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.388-0500 c20011| 2016-04-06T02:53:18.977-0500 I COMMAND [conn63] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.391-0500 c20011| 2016-04-06T02:53:18.977-0500 D COMMAND [conn63] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.396-0500 c20011| 2016-04-06T02:53:18.977-0500 I COMMAND [conn63] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.402-0500 c20011| 2016-04-06T02:53:18.977-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:00.403-0500 c20011| 2016-04-06T02:53:18.977-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:00.403-0500 c20011| 2016-04-06T02:53:18.977-0500 D COMMAND [conn63] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.404-0500 c20011| 2016-04-06T02:53:18.977-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 
has reached optime: { ts: Timestamp 1459929194000|2, t: 5 } and is durable through: { ts: Timestamp 1459929194000|2, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.408-0500 c20011| 2016-04-06T02:53:18.977-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.412-0500 c20011| 2016-04-06T02:53:18.977-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.413-0500 c20011| 2016-04-06T02:53:18.977-0500 I COMMAND [conn63] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.415-0500 s20014| 2016-04-06T02:53:36.914-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 774 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:00.420-0500 s20014| 2016-04-06T02:53:36.928-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 774 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929216000|3, t: 7 }, electionId: ObjectId('7fffffff0000000000000007') } [js_test:multi_coll_drop] 2016-04-06T02:54:00.422-0500 s20014| 2016-04-06T02:53:37.133-0500 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 776 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:07.133-0500 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.422-0500 s20014| 2016-04-06T02:53:37.136-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 776 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:00.424-0500 s20014| 2016-04-06T02:53:37.136-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 776 finished with response: { cacheGeneration: ObjectId('5704c04c1c974089be062348'), ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.424-0500 s20014| 2016-04-06T02:53:37.137-0500 I ACCESS [UserCacheInvalidator] User cache generation changed from 5704c01f525046a6a8063338 to 5704c04c1c974089be062348; invalidating user cache [js_test:multi_coll_drop] 2016-04-06T02:54:00.427-0500 s20014| 2016-04-06T02:53:39.744-0500 D ASIO [conn1] startCommand: RemoteCommand 778 -- target:mongovm16:20010 db:multidrop cmd:{ find: "coll", shardVersion: [ Timestamp 1000|80, ObjectId('5704c02806c33406d4d9c0c0') ] } [js_test:multi_coll_drop] 2016-04-06T02:54:00.428-0500 s20014| 2016-04-06T02:53:39.744-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] Connecting to mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:54:00.429-0500 c20012| 2016-04-06T02:53:36.504-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.439-0500 c20012| 2016-04-06T02:53:36.504-0500 I COMMAND [conn38] 
command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.440-0500 c20012| 2016-04-06T02:53:36.504-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.444-0500 c20012| 2016-04-06T02:53:36.504-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.450-0500 c20012| 2016-04-06T02:53:36.506-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25d'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216506), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.454-0500 c20012| 2016-04-06T02:53:36.506-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.456-0500 c20012| 2016-04-06T02:53:36.506-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.460-0500 c20012| 2016-04-06T02:53:36.506-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.460-0500 c20012| 2016-04-06T02:53:36.506-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.461-0500 c20012| 2016-04-06T02:53:36.506-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.463-0500 c20012| 2016-04-06T02:53:36.506-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.467-0500 c20012| 2016-04-06T02:53:36.506-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25d'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216506), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.472-0500 c20012| 2016-04-06T02:53:36.506-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25d'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216506), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f25d'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216506), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.474-0500 c20012| 2016-04-06T02:53:36.506-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.475-0500 c20012| 2016-04-06T02:53:36.506-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.479-0500 c20012| 2016-04-06T02:53:36.506-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.485-0500 c20012| 2016-04-06T02:53:36.506-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.490-0500 c20012| 2016-04-06T02:53:36.506-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.495-0500 c20012| 2016-04-06T02:53:36.507-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.498-0500 c20012| 2016-04-06T02:53:36.507-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.500-0500 c20012| 2016-04-06T02:53:36.507-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.501-0500 c20012| 2016-04-06T02:53:36.507-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.506-0500 c20012| 2016-04-06T02:53:36.507-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.506-0500 c20012| 2016-04-06T02:53:36.507-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.511-0500 c20012| 2016-04-06T02:53:36.508-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.513-0500 c20012| 2016-04-06T02:53:36.508-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.515-0500 c20012| 2016-04-06T02:53:36.508-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.518-0500 c20012| 2016-04-06T02:53:36.508-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.522-0500 c20012| 2016-04-06T02:53:36.508-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:00.530-0500 c20012| 2016-04-06T02:53:36.509-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.532-0500 c20012| 2016-04-06T02:53:36.509-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216509), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.535-0500 c20012| 2016-04-06T02:53:36.509-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.536-0500 c20012| 2016-04-06T02:53:36.509-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.538-0500 c20012| 2016-04-06T02:53:36.509-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.540-0500 c20012| 2016-04-06T02:53:36.509-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.541-0500 c20012| 2016-04-06T02:53:36.509-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.543-0500 c20012| 2016-04-06T02:53:36.509-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.546-0500 c20012| 2016-04-06T02:53:36.509-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216509), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.549-0500 c20012| 2016-04-06T02:53:36.509-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216509), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f25e'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216509), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.552-0500 c20012| 2016-04-06T02:53:36.509-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.554-0500 c20012| 2016-04-06T02:53:36.509-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.557-0500 c20012| 2016-04-06T02:53:36.509-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.559-0500 c20012| 2016-04-06T02:53:36.509-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.563-0500 c20012| 2016-04-06T02:53:36.509-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.567-0500 c20012| 2016-04-06T02:53:36.510-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.571-0500 c20012| 2016-04-06T02:53:36.510-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.573-0500 c20012| 2016-04-06T02:53:36.510-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.574-0500 c20012| 2016-04-06T02:53:36.510-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.578-0500 c20012| 2016-04-06T02:53:36.510-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.579-0500 c20012| 2016-04-06T02:53:36.510-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.590-0500 c20012| 2016-04-06T02:53:36.511-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.608-0500 c20012| 2016-04-06T02:53:36.516-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25f'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216515), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.612-0500 c20012| 2016-04-06T02:53:36.516-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.627-0500 c20012| 2016-04-06T02:53:36.516-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.628-0500 c20012| 2016-04-06T02:53:36.516-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.630-0500 c20012| 2016-04-06T02:53:36.516-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.632-0500 c20012| 2016-04-06T02:53:36.516-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.632-0500 c20012| 2016-04-06T02:53:36.516-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.637-0500 c20012| 2016-04-06T02:53:36.516-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25f'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216515), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.642-0500 c20012| 2016-04-06T02:53:36.516-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f25f'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216515), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f25f'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216515), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms
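The cycle above (and the near-identical cycles before and after it) is the shard's distributed lock manager polling for the "multidrop.coll" collection lock on the config servers. Each attempt is a findAndModify with upsert: true whose query { _id: "multidrop.coll", state: 0 } only matches an unlocked lock document. The lock is currently held (state 2), so the query matches nothing, the upsert path tries to insert a fresh document with the same _id, and the unique _id_ index rejects it with E11000. The caller treats the duplicate-key error as "lock busy", re-reads config.locks and config.lockpings to check whether the holder still appears alive, and retries with a new ObjectId. A minimal shell sketch of one attempt, with every field value copied from the log entry above (the connection line, and the assumption that c20012 listens on port 20012, are illustrative, not part of the test run):

    // Sketch only: replays the logged findAndModify by hand.
    var configDB = new Mongo("mongovm16:20012").getDB("config");
    var res = configDB.runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },   // matches only an unlocked lock doc
        update: { $set: {
            ts: ObjectId('5704c08065c17830b843f25f'), // fresh id for this attempt
            state: 2,                                 // 2 = lock held
            who: "mongovm16:20010:1459929128:185613966:conn5",
            process: "mongovm16:20010:1459929128:185613966",
            when: new Date(1459929216515),
            why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll"
        } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    // While another process holds the lock, the document already exists in
    // state 2, the query matches nothing, and the upsert's insert collides
    // on _id_: res => { ok: 0, code: 11000, errmsg: "E11000 duplicate key error ..." }

Only once the holder releases the lock (state back to 0) does the same command match, apply the $set, and return the updated document, which is why the loop simply polls.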
[js_test:multi_coll_drop] 2016-04-06T02:54:00.645-0500 c20012| 2016-04-06T02:53:36.516-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.647-0500 c20012| 2016-04-06T02:53:36.516-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.649-0500 c20012| 2016-04-06T02:53:36.516-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.651-0500 c20012| 2016-04-06T02:53:36.516-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.654-0500 c20012| 2016-04-06T02:53:36.516-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.657-0500 c20012| 2016-04-06T02:53:36.516-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.658-0500 c20012| 2016-04-06T02:53:36.516-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.660-0500 c20012| 2016-04-06T02:53:36.516-0500 D COMMAND [conn38] Using 'committed' snapshot.
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.662-0500 c20012| 2016-04-06T02:53:36.516-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.671-0500 c20012| 2016-04-06T02:53:36.517-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.672-0500 c20012| 2016-04-06T02:53:36.517-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.674-0500 c20012| 2016-04-06T02:53:36.518-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.675-0500 c20012| 2016-04-06T02:53:36.519-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.678-0500 c20012| 2016-04-06T02:53:36.519-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.682-0500 c20012| 2016-04-06T02:53:36.519-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.684-0500 c20012| 2016-04-06T02:53:36.519-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:00.689-0500 c20012| 2016-04-06T02:53:36.519-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.694-0500 c20012| 2016-04-06T02:53:36.519-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f260'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216519), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.696-0500 c20012| 2016-04-06T02:53:36.519-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.698-0500 c20012| 2016-04-06T02:53:36.519-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.699-0500 c20012| 2016-04-06T02:53:36.519-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.701-0500 c20012| 2016-04-06T02:53:36.519-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.701-0500 c20012| 2016-04-06T02:53:36.519-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.702-0500 c20012| 2016-04-06T02:53:36.519-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.706-0500 c20012| 2016-04-06T02:53:36.519-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f260'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216519), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.712-0500 c20012| 2016-04-06T02:53:36.520-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f260'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216519), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f260'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216519), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.715-0500 c20012| 2016-04-06T02:53:36.520-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.716-0500 c20012| 2016-04-06T02:53:36.520-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.721-0500 c20012| 2016-04-06T02:53:36.520-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.721-0500 c20012| 2016-04-06T02:53:36.520-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.724-0500 c20012| 2016-04-06T02:53:36.520-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.727-0500 c20012| 2016-04-06T02:53:36.520-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.731-0500 c20012| 2016-04-06T02:53:36.520-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.733-0500 c20012| 2016-04-06T02:53:36.520-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.733-0500 c20012| 2016-04-06T02:53:36.520-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.738-0500 c20012| 2016-04-06T02:53:36.520-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.739-0500 c20012| 2016-04-06T02:53:36.520-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.740-0500 c20012| 2016-04-06T02:53:36.521-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.748-0500 c20012| 2016-04-06T02:53:36.522-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f261'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216522), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.748-0500 c20012| 2016-04-06T02:53:36.522-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.753-0500 c20012| 2016-04-06T02:53:36.522-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.757-0500 c20012| 2016-04-06T02:53:36.522-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.760-0500 c20012| 2016-04-06T02:53:36.522-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.762-0500 c20012| 2016-04-06T02:53:36.522-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.764-0500 c20012| 2016-04-06T02:53:36.522-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.773-0500 c20012| 2016-04-06T02:53:36.522-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f261'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216522), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.781-0500 c20012| 2016-04-06T02:53:36.522-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f261'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216522), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f261'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216522), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.783-0500 c20012| 2016-04-06T02:53:36.522-0500 D COMMAND [conn38] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.785-0500 c20012| 2016-04-06T02:53:36.522-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.792-0500 c20012| 2016-04-06T02:53:36.522-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.797-0500 c20012| 2016-04-06T02:53:36.522-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.803-0500 c20012| 2016-04-06T02:53:36.522-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.805-0500 c20012| 2016-04-06T02:53:36.523-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.808-0500 c20012| 2016-04-06T02:53:36.523-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.810-0500 c20012| 2016-04-06T02:53:36.523-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.813-0500 c20012| 2016-04-06T02:53:36.523-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.817-0500 c20012| 2016-04-06T02:53:36.524-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.820-0500 c20012| 2016-04-06T02:53:36.524-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.820-0500 c20012| 2016-04-06T02:53:36.529-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.832-0500 c20012| 2016-04-06T02:53:36.534-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f262'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216532), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.837-0500 c20012| 2016-04-06T02:53:36.534-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.840-0500 c20012| 2016-04-06T02:53:36.534-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.845-0500 c20012| 2016-04-06T02:53:36.534-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.848-0500 c20012| 2016-04-06T02:53:36.534-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.849-0500 c20012| 2016-04-06T02:53:36.534-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.852-0500 c20012| 2016-04-06T02:53:36.534-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:00.857-0500 c20012| 2016-04-06T02:53:36.535-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f262'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216532), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.862-0500 c20012| 2016-04-06T02:53:36.535-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f262'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216532), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f262'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216532), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:00.869-0500 c20012| 2016-04-06T02:53:36.535-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.877-0500 c20012| 2016-04-06T02:53:36.535-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.880-0500 c20012| 2016-04-06T02:53:36.535-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.882-0500 c20012| 2016-04-06T02:53:36.535-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.882-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.883-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.883-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.885-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.890-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.890-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.893-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.894-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.895-0500 c20013| 2016-04-06T02:52:41.910-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.895-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.896-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.898-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.898-0500 c20013| 2016-04-06T02:52:41.911-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:00.902-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.906-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.925-0500 c20013| 2016-04-06T02:52:41.911-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:00.926-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.926-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.930-0500 c20013| 2016-04-06T02:52:41.911-0500 D 
EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.931-0500 c20013| 2016-04-06T02:52:41.911-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1359 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.911-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|13, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:00.936-0500 c20013| 2016-04-06T02:52:41.911-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1359 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.956-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.960-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.961-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.962-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.968-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.972-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.972-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.973-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.974-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.975-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.976-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.978-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.980-0500 c20013| 2016-04-06T02:52:41.911-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:00.983-0500 c20013| 2016-04-06T02:52:41.912-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:00.988-0500 c20013| 2016-04-06T02:52:41.912-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:00.992-0500 c20013| 2016-04-06T02:52:41.912-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1360 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:00.992-0500 c20013| 2016-04-06T02:52:41.912-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1360 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.993-0500 c20013| 2016-04-06T02:52:41.913-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1360 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:00.996-0500 c20013| 2016-04-06T02:52:41.946-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:00.998-0500 c20013| 2016-04-06T02:52:41.946-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1362 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:00.998-0500 c20013| 2016-04-06T02:52:41.946-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1362 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:00.999-0500 c20013| 2016-04-06T02:52:41.947-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1362 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.000-0500 c20013| 2016-04-06T02:52:41.952-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1359 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.001-0500 c20013| 2016-04-06T02:52:41.952-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.001-0500 c20013| 2016-04-06T02:52:41.952-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929161000|14, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929161000|13, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.001-0500 c20013| 2016-04-06T02:52:41.952-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999976μs [js_test:multi_coll_drop] 2016-04-06T02:54:01.002-0500 c20013| 2016-04-06T02:52:41.952-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|14, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.012-0500 c20013| 2016-04-06T02:52:41.952-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.012-0500 c20013| 2016-04-06T02:52:41.952-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.013-0500 c20013| 2016-04-06T02:52:41.952-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:01.013-0500 c20013| 2016-04-06T02:52:41.953-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:01.027-0500 c20013| 2016-04-06T02:52:41.953-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:01.029-0500 c20013| 2016-04-06T02:52:41.953-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1365 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.953-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|14, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.033-0500 c20013| 2016-04-06T02:52:41.953-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1365 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.034-0500 c20013| 2016-04-06T02:52:41.953-0500 D 
COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|54 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.036-0500 c20013| 2016-04-06T02:52:41.953-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.039-0500 c20013| 2016-04-06T02:52:41.953-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|54 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.040-0500 c20013| 2016-04-06T02:52:41.953-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:01.045-0500 c20013| 2016-04-06T02:52:41.953-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|54 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|14, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:01.058-0500 c20013| 2016-04-06T02:52:41.956-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1365 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929161000|15, t: 3, h: -3022412945155125212, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04965c17830b843f1b5'), state: 2, when: new Date(1459929161955), why: "splitting chunk [{ _id: -73.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.068-0500 c20013| 2016-04-06T02:52:41.956-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929161000|15 and ending at ts: Timestamp 1459929161000|15
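Here the getMore against the primary's oplog (cursor 19853084149 on mongovm16:20011) returned one update to config.locks — a lock grant for an earlier split — which the repl writer workers below apply. The same entry broken out field by field (values verbatim from the nextBatch above; the shell-style Timestamp/NumberLong spellings and the comments are annotations):

    // The fetched oplog entry from Request 1365, reindented. The log prints
    // the optime as "Timestamp 1459929161000|15"; in shell syntax that is
    // Timestamp(<seconds>, <increment>).
    var entry = {
        ts: Timestamp(1459929161, 15),          // optime of the write
        t: NumberLong(3),                       // election term that produced it
        h: NumberLong("-3022412945155125212"),  // per-entry hash (3.x oplog format)
        v: 2,                                   // oplog entry version
        op: "u",                                // an update
        ns: "config.locks",                     // target collection
        o2: { _id: "multidrop.coll" },          // which document to update; applying
                                                // it is the "Using idhack" line below
        o: { $set: {                            // the modifications to apply
            ts: ObjectId('5704c04965c17830b843f1b5'),
            state: 2,
            when: new Date(1459929161955),
            why: "splitting chunk [{ _id: -73.0 }, { _id: MaxKey }) in multidrop.coll"
        } }
    };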
[js_test:multi_coll_drop] 2016-04-06T02:54:01.072-0500 c20013| 2016-04-06T02:52:41.958-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.076-0500 c20013| 2016-04-06T02:52:41.958-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.077-0500 c20013| 2016-04-06T02:52:41.958-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.078-0500 c20013| 2016-04-06T02:52:41.958-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.079-0500 c20013| 2016-04-06T02:52:41.958-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.080-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.081-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.082-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.083-0500 c20013| 2016-04-06T02:52:41.959-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:01.083-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.085-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.088-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.091-0500 c20013| 2016-04-06T02:52:41.959-0500 D QUERY [repl writer worker 8] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:01.094-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.102-0500 c20013| 2016-04-06T02:52:41.959-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1367 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:46.959-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|14, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.103-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.105-0500 c20013| 2016-04-06T02:52:41.959-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1367 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.105-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.106-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.107-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
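The config reads interleaved with this worker-pool churn (conn10 above, conn15 just below) show why the afterOpTime field matters: each read carries readConcern { level: "majority", afterOpTime: ... }, and when the requested optime is not yet in this secondary's committed snapshot, the server parks in waitUntilOpTime (the "waiting for a new snapshot to occur for micros: ..." lines) until the commit point advances or maxTimeMS expires. A sketch of the same read, with the values from conn15's command below (connecting by hand to c20013, assumed to listen on port 20013, is illustrative):

    // Sketch: a causally pinned majority read as issued against the config
    // secondary. If Timestamp(1459929161, 15) is not yet in this node's
    // committed snapshot, the command blocks until replication catches up
    // (or maxTimeMS elapses).
    var conf = new Mongo("mongovm16:20013").getDB("config");
    conf.runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        limit: 1,
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929161, 15), t: NumberLong(3) }
        },
        maxTimeMS: 30000
    });

In the log, conn15's wait resolves a few milliseconds later, once _lastCommittedOpTime is bumped to Timestamp 1459929161000|15 and the read answers from the new committed snapshot (the 8ms config.collections command line further down).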
[js_test:multi_coll_drop] 2016-04-06T02:54:01.110-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.111-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.124-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.135-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.135-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.136-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.137-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.141-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.148-0500 c20013| 2016-04-06T02:52:41.959-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.149-0500 c20013| 2016-04-06T02:52:41.960-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.152-0500 c20013| 2016-04-06T02:52:41.960-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.153-0500 c20013| 2016-04-06T02:52:41.960-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.159-0500 c20013| 2016-04-06T02:52:41.960-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.165-0500 c20013| 2016-04-06T02:52:41.961-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.167-0500 c20013| 2016-04-06T02:52:41.961-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.168-0500 c20013| 2016-04-06T02:52:41.966-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.189-0500 c20013| 2016-04-06T02:52:41.966-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.190-0500 c20013| 2016-04-06T02:52:41.966-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.191-0500 c20013| 2016-04-06T02:52:41.967-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.193-0500 c20013| 2016-04-06T02:52:41.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1368 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.193-0500 c20013| 2016-04-06T02:52:41.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1368 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.194-0500 c20013| 2016-04-06T02:52:41.967-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1368 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.196-0500 c20013| 2016-04-06T02:52:41.996-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.199-0500 c20013| 2016-04-06T02:52:41.996-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1370 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.200-0500 c20013| 2016-04-06T02:52:41.996-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1370 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.200-0500 c20013| 2016-04-06T02:52:41.997-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1370 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.202-0500 c20013| 2016-04-06T02:52:41.997-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1367 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.204-0500 c20013| 2016-04-06T02:52:41.999-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.207-0500 c20013| 2016-04-06T02:52:41.999-0500 D REPL [conn15] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929161000|15, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929161000|14, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.208-0500 c20013| 2016-04-06T02:52:41.999-0500 D REPL [conn15] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999977μs [js_test:multi_coll_drop] 2016-04-06T02:54:01.210-0500 c20013| 2016-04-06T02:52:42.004-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929161000|15, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.213-0500 c20013| 2016-04-06T02:52:42.005-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|15, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.231-0500 c20013| 2016-04-06T02:52:42.005-0500 D COMMAND [conn15] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.237-0500 c20013| 2016-04-06T02:52:42.005-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:01.255-0500 c20013| 2016-04-06T02:52:42.007-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929161000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:54:01.262-0500 c20013| 2016-04-06T02:52:42.007-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:01.271-0500 c20013| 2016-04-06T02:52:42.008-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1373 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.008-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|15, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.286-0500 c20013| 2016-04-06T02:52:42.017-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1373 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.293-0500 c20013| 2016-04-06T02:52:42.020-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1373 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|1, t: 3, h: 
-2357057619402778341, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-73.0", lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -73.0 }, max: { _id: -72.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-73.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-72.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.295-0500 c20013| 2016-04-06T02:52:42.021-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|1 and ending at ts: Timestamp 1459929162000|1 [js_test:multi_coll_drop] 2016-04-06T02:54:01.296-0500 c20013| 2016-04-06T02:52:42.022-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.299-0500 c20013| 2016-04-06T02:52:42.022-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.300-0500 c20013| 2016-04-06T02:52:42.022-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.305-0500 c20013| 2016-04-06T02:52:42.022-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.307-0500 c20013| 2016-04-06T02:52:42.022-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.309-0500 c20013| 2016-04-06T02:52:42.022-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.310-0500 c20013| 2016-04-06T02:52:42.022-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.311-0500 c20013| 2016-04-06T02:52:42.022-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.315-0500 c20013| 2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.316-0500 c20013| 2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.316-0500 c20013| 2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.321-0500 c20013| 2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.327-0500 c20013| 2016-04-06T02:52:42.023-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:01.337-0500 c20013| 2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.338-0500 c20013| 
2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.339-0500 c20013| 2016-04-06T02:52:42.023-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "multidrop.coll-_id_-73.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:01.341-0500 c20013| 2016-04-06T02:52:42.023-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "multidrop.coll-_id_-72.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:01.343-0500 c20013| 2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.346-0500 c20013| 2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.347-0500 c20013| 2016-04-06T02:52:42.023-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.349-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.349-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.351-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.352-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.354-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.357-0500 c20013| 2016-04-06T02:52:42.024-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1375 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.024-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929161000|15, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.358-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.358-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.359-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.359-0500 c20013| 2016-04-06T02:52:42.024-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1375 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.361-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.362-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.365-0500 c20013| 2016-04-06T02:52:42.024-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:01.366-0500 c20013| 2016-04-06T02:52:42.025-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.367-0500 c20013| 2016-04-06T02:52:42.025-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.369-0500 c20013| 2016-04-06T02:52:42.025-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.371-0500 c20013| 2016-04-06T02:52:42.025-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.372-0500 c20013| 2016-04-06T02:52:42.025-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.373-0500 c20013| 2016-04-06T02:52:42.025-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.379-0500 c20013| 2016-04-06T02:52:42.026-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.382-0500 c20013| 2016-04-06T02:52:42.026-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1376 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.385-0500 c20013| 2016-04-06T02:52:42.026-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1376 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.386-0500 c20013| 2016-04-06T02:52:42.026-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1376 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.391-0500 c20013| 2016-04-06T02:52:42.034-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } 
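The replSetUpdatePosition traffic just above (RemoteCommands 1368, 1370, and 1376) is the other half of the loop: after each applied batch, c20013 reports every member's durableOpTime/appliedOpTime upstream to mongovm16:20011, the primary advances _lastCommittedOpTime, and that in turn unblocks majority reads such as the config.collections find at 02:52:41.999 that was parked in waitUntilOpTime. Below is a hedged shell sketch of both command shapes; the optime values are copied from the log but the calls are illustrative only, since replSetUpdatePosition is an internal replication command that ordinary clients do not issue.

    // Sketch only: the position report a secondary sends to its sync source.
    // The field shape is exactly what the Reporter lines above log.
    db.getSiblingDB("admin").runCommand({
        replSetUpdatePosition: 1,
        optimes: [{
            durableOpTime: { ts: Timestamp(1459929162, 1), t: NumberLong(3) },
            appliedOpTime: { ts: Timestamp(1459929162, 1), t: NumberLong(3) },
            memberId: 2,
            cfgver: 1
        }]
    });
    // A majority read with afterOpTime, as issued by conn15 at 02:52:41.999.
    // It blocks until a committed snapshot covers the requested optime.
    db.getSiblingDB("config").runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929161, 15), t: NumberLong(3) }
        },
        limit: 1,
        maxTimeMS: 30000
    });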
[js_test:multi_coll_drop] 2016-04-06T02:54:01.396-0500 c20013| 2016-04-06T02:52:42.034-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1378 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.398-0500 c20013| 2016-04-06T02:52:42.034-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1378 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.403-0500 c20013| 2016-04-06T02:52:42.034-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1378 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.406-0500 c20013| 2016-04-06T02:52:42.035-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1375 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.407-0500 c20013| 2016-04-06T02:52:42.035-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.408-0500 c20013| 2016-04-06T02:52:42.035-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:01.410-0500 c20013| 2016-04-06T02:52:42.035-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1381 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.035-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|1, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.410-0500 c20013| 2016-04-06T02:52:42.035-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1381 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.420-0500 c20013| 2016-04-06T02:52:42.035-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1381 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|2, t: 3, h: -59394675175149910, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:42.035-0500-5704c04a65c17830b843f1b6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162035), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -73.0 }, max: { _id: MaxKey } }, left: { min: { _id: -73.0 }, max: { _id: -72.0 }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -72.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.422-0500 c20013| 2016-04-06T02:52:42.040-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|2 and ending at ts: Timestamp 1459929162000|2 [js_test:multi_coll_drop] 2016-04-06T02:54:01.423-0500 c20013| 2016-04-06T02:52:42.041-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.425-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.426-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.426-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.433-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.437-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.443-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.446-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.447-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.451-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.453-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.454-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.455-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.457-0500 c20013| 2016-04-06T02:52:42.042-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:01.459-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.461-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.463-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.464-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.466-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.475-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.479-0500 c20013| 2016-04-06T02:52:42.042-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1383 -- target:mongovm16:20011 db:local 
expDate:2016-04-06T02:52:47.042-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|1, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.479-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.483-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.485-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.485-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.487-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.488-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.489-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.491-0500 c20013| 2016-04-06T02:52:42.043-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.493-0500 c20013| 2016-04-06T02:52:42.043-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.496-0500 c20013| 2016-04-06T02:52:42.043-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1383 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.498-0500 c20013| 2016-04-06T02:52:42.042-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.501-0500 c20013| 2016-04-06T02:52:42.043-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.503-0500 c20013| 2016-04-06T02:52:42.043-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.505-0500 c20013| 2016-04-06T02:52:42.047-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.506-0500 c20013| 2016-04-06T02:52:42.047-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.515-0500 c20013| 2016-04-06T02:52:42.058-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.518-0500 c20013| 2016-04-06T02:52:42.059-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.525-0500 c20013| 2016-04-06T02:52:42.059-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1384 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.529-0500 c20013| 2016-04-06T02:52:42.059-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1384 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.530-0500 c20013| 2016-04-06T02:52:42.059-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1384 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.534-0500 c20013| 2016-04-06T02:52:42.064-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.553-0500 c20013| 2016-04-06T02:52:42.064-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1386 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.557-0500 c20013| 2016-04-06T02:52:42.064-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1386 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.560-0500 c20013| 2016-04-06T02:52:42.064-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1386 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.561-0500 c20013| 2016-04-06T02:52:42.065-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1383 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.565-0500 c20013| 2016-04-06T02:52:42.065-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.566-0500 c20013| 2016-04-06T02:52:42.065-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:01.568-0500 c20013| 2016-04-06T02:52:42.065-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1389 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.065-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.571-0500 c20013| 2016-04-06T02:52:42.065-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1389 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.573-0500 c20013| 2016-04-06T02:52:42.068-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1389 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|3, t: 3, h: 217075548524503605, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.577-0500 c20013| 2016-04-06T02:52:42.068-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|3 and ending at ts: Timestamp 1459929162000|3 [js_test:multi_coll_drop] 2016-04-06T02:54:01.580-0500 c20013| 2016-04-06T02:52:42.069-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.581-0500 c20013| 2016-04-06T02:52:42.069-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.582-0500 c20013| 2016-04-06T02:52:42.069-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.584-0500 c20013| 2016-04-06T02:52:42.069-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.586-0500 c20013| 2016-04-06T02:52:42.069-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.588-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.588-0500 c20013| 2016-04-06T02:52:42.070-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:01.588-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.589-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.592-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.594-0500 c20013| 2016-04-06T02:52:42.070-0500 D QUERY [repl writer worker 12] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:01.596-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.598-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.598-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.601-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.601-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.602-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.603-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.604-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.605-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.608-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:01.612-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.613-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.616-0500 c20013| 2016-04-06T02:52:42.070-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.621-0500 c20013| 2016-04-06T02:52:42.071-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1391 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.071-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.624-0500 c20013| 2016-04-06T02:52:42.071-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1391 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.625-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.626-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.628-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.630-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.633-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.633-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.634-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.636-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.639-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.640-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.646-0500 c20013| 2016-04-06T02:52:42.071-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.648-0500 c20013| 2016-04-06T02:52:42.072-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.651-0500 c20013| 2016-04-06T02:52:42.072-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.656-0500 c20013| 2016-04-06T02:52:42.072-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1392 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.658-0500 c20013| 2016-04-06T02:52:42.072-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1392 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.659-0500 c20013| 2016-04-06T02:52:42.072-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1392 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.671-0500 c20013| 2016-04-06T02:52:42.077-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.678-0500 c20013| 2016-04-06T02:52:42.077-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1394 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.680-0500 c20013| 2016-04-06T02:52:42.077-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1394 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.683-0500 c20013| 2016-04-06T02:52:42.077-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1394 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.686-0500 c20013| 2016-04-06T02:52:42.078-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1391 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.703-0500 c20013| 2016-04-06T02:52:42.078-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.704-0500 c20013| 2016-04-06T02:52:42.078-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929162000|3, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929162000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.705-0500 c20013| 2016-04-06T02:52:42.078-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999976μs [js_test:multi_coll_drop] 2016-04-06T02:54:01.707-0500 c20013| 2016-04-06T02:52:42.080-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.710-0500 c20013| 2016-04-06T02:52:42.080-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.712-0500 c20013| 2016-04-06T02:52:42.080-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.715-0500 c20013| 2016-04-06T02:52:42.080-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:01.716-0500 c20013| 2016-04-06T02:52:42.080-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:01.722-0500 c20013| 2016-04-06T02:52:42.080-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:01.724-0500 c20013| 2016-04-06T02:52:42.081-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1397 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.081-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.726-0500 c20013| 2016-04-06T02:52:42.081-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1397 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.730-0500 c20013| 2016-04-06T02:52:42.084-0500 D COMMAND [conn10] run 
command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.732-0500 c20013| 2016-04-06T02:52:42.084-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.736-0500 c20013| 2016-04-06T02:52:42.084-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.740-0500 c20013| 2016-04-06T02:52:42.084-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:01.745-0500 c20013| 2016-04-06T02:52:42.084-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:01.760-0500 c20013| 2016-04-06T02:52:42.089-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1397 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|4, t: 3, h: 3993313178579973780, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04a65c17830b843f1b7'), state: 2, when: new Date(1459929162087), why: "splitting chunk [{ _id: -72.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.767-0500 c20013| 2016-04-06T02:52:42.089-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|4 and ending at ts: Timestamp 1459929162000|4 [js_test:multi_coll_drop] 2016-04-06T02:54:01.776-0500 c20013| 2016-04-06T02:52:42.089-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.782-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.790-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.792-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.793-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.796-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.796-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.797-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.800-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.803-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.804-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.805-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.806-0500 c20013| 2016-04-06T02:52:42.090-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:01.807-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.808-0500 c20013| 2016-04-06T02:52:42.089-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.809-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.810-0500 c20013| 2016-04-06T02:52:42.090-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:01.811-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.812-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.813-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.816-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:01.819-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.821-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.823-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.824-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.825-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.827-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.828-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.829-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.839-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.840-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.843-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.844-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.846-0500 c20013| 2016-04-06T02:52:42.090-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.852-0500 c20013| 2016-04-06T02:52:42.091-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1399 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.091-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.853-0500 c20013| 2016-04-06T02:52:42.091-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1399 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.855-0500 c20013| 2016-04-06T02:52:42.092-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.855-0500 c20013| 2016-04-06T02:52:42.092-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.874-0500 c20013| 2016-04-06T02:52:42.092-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.887-0500 c20013| 2016-04-06T02:52:42.092-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1400 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.889-0500 c20013| 2016-04-06T02:52:42.092-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1400 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.890-0500 c20013| 2016-04-06T02:52:42.093-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1400 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.893-0500 c20013| 2016-04-06T02:52:42.097-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.902-0500 c20013| 2016-04-06T02:52:42.097-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1402 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:01.906-0500 c20013| 2016-04-06T02:52:42.097-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1402 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.908-0500 c20013| 2016-04-06T02:52:42.097-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1402 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.909-0500 c20013| 2016-04-06T02:52:42.098-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1399 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.913-0500 c20013| 2016-04-06T02:52:42.098-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.914-0500 c20013| 2016-04-06T02:52:42.098-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:01.923-0500 c20013| 2016-04-06T02:52:42.098-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1405 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.098-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|4, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.924-0500 c20013| 2016-04-06T02:52:42.098-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1405 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.932-0500 c20013| 2016-04-06T02:52:42.101-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1405 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|5, t: 3, h: -5434502165658029569, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-72.0", lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -72.0 }, max: { _id: -71.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-72.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-71.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:01.934-0500 c20013| 2016-04-06T02:52:42.101-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|5 and ending at ts: Timestamp 1459929162000|5 [js_test:multi_coll_drop] 2016-04-06T02:54:01.935-0500 c20013| 2016-04-06T02:52:42.101-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:01.937-0500 c20013| 2016-04-06T02:52:42.101-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.939-0500 c20013| 2016-04-06T02:52:42.101-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.940-0500 c20013| 2016-04-06T02:52:42.101-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.943-0500 c20013| 2016-04-06T02:52:42.101-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.943-0500 c20013| 2016-04-06T02:52:42.101-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.944-0500 c20013| 2016-04-06T02:52:42.101-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.946-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.952-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.954-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.955-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.971-0500 c20013| 2016-04-06T02:52:42.101-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.971-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.971-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.972-0500 c20013| 2016-04-06T02:52:42.102-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:01.972-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.972-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.972-0500 c20013| 2016-04-06T02:52:42.102-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.973-0500 c20013| 2016-04-06T02:52:42.102-0500 D QUERY [repl writer worker 5] Using idhack: { _id: "multidrop.coll-_id_-72.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:01.973-0500 c20013| 2016-04-06T02:52:42.102-0500 D QUERY [repl writer worker 5] Using idhack: { _id: "multidrop.coll-_id_-71.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:01.973-0500 c20013| 2016-04-06T02:52:42.103-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:01.974-0500 c20013| 2016-04-06T02:52:42.103-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.974-0500 c20013| 2016-04-06T02:52:42.103-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.974-0500 c20013| 2016-04-06T02:52:42.103-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.975-0500 c20013| 2016-04-06T02:52:42.103-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.975-0500 c20013| 2016-04-06T02:52:42.103-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.975-0500 c20013| 2016-04-06T02:52:42.103-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1407 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.103-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|4, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:01.976-0500 c20013| 2016-04-06T02:52:42.103-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1407 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:01.976-0500 c20013| 2016-04-06T02:52:42.103-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.976-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.977-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.977-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.977-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.978-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.978-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.979-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.979-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.980-0500 c20013| 2016-04-06T02:52:42.104-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:01.984-0500 c20013| 2016-04-06T02:52:42.105-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.001-0500 c20013| 2016-04-06T02:52:42.105-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.042-0500 c20013| 2016-04-06T02:52:42.105-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1408 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.046-0500 c20013| 2016-04-06T02:52:42.105-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1408 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.050-0500 c20013| 2016-04-06T02:52:42.105-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1408 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.055-0500 c20013| 2016-04-06T02:52:42.109-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.066-0500 c20013| 2016-04-06T02:52:42.109-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1410 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.068-0500 c20013| 2016-04-06T02:52:42.109-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1410 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.069-0500 c20013| 2016-04-06T02:52:42.109-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1410 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.071-0500 c20013| 2016-04-06T02:52:42.110-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1407 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.071-0500 c20013| 2016-04-06T02:52:42.110-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.073-0500 c20013| 2016-04-06T02:52:42.110-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:02.076-0500 c20013| 2016-04-06T02:52:42.111-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1413 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.111-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|5, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.089-0500 c20013| 2016-04-06T02:52:42.111-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1413 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.097-0500 c20013| 2016-04-06T02:52:42.111-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1413 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|6, t: 3, h: -4967056731092326312, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:42.110-0500-5704c04a65c17830b843f1b8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162110), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -72.0 }, max: { _id: MaxKey } }, left: { min: { _id: -72.0 }, max: { _id: -71.0 }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -71.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.099-0500 c20013| 2016-04-06T02:52:42.111-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|6 and ending at ts: Timestamp 1459929162000|6 [js_test:multi_coll_drop] 2016-04-06T02:54:02.100-0500 c20013| 2016-04-06T02:52:42.112-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.101-0500 c20013| 2016-04-06T02:52:42.112-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.103-0500 c20013| 2016-04-06T02:52:42.112-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.104-0500 c20013| 2016-04-06T02:52:42.112-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.105-0500 c20013| 2016-04-06T02:52:42.112-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.106-0500 c20013| 2016-04-06T02:52:42.112-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.110-0500 c20013| 2016-04-06T02:52:42.112-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.111-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.112-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.114-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.116-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.118-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.119-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.119-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.121-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.122-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.123-0500 c20013| 2016-04-06T02:52:42.113-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.124-0500 c20013| 2016-04-06T02:52:42.113-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:02.130-0500 c20013| 2016-04-06T02:52:42.114-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1415 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.114-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|5, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.133-0500 c20013| 2016-04-06T02:52:42.114-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1415 on host mongovm16:20011 
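The getMore/replSetUpdatePosition traffic above is the steady-state replication loop on config node c20013: the background sync fetcher tails the primary's local.oplog.rs with awaitData getMores (maxTimeMS: 2500, carrying the node's term and lastKnownCommittedOpTime), the repl writer worker pool applies each fetched batch, and the SyncSourceFeedback reporter acknowledges progress upstream, with durableOpTime trailing appliedOpTime briefly until each entry is durable on disk. A minimal shell sketch of the same tailing read, using the standard legacy-shell cursor options; the starting point and the print loop are illustrative, not part of the test:

    // Tail the replica-set oplog the way the fetcher above does.
    var oplog = db.getSiblingDB("local").oplog.rs;
    // Illustrative starting point: the newest entry currently in the oplog.
    var last = oplog.find().sort({$natural: -1}).limit(1).next();
    var cur = oplog.find({ts: {$gt: last.ts}})
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    // Prints entries as they arrive; exits once the tailable cursor dies.
    while (cur.hasNext()) { printjson(cur.next()); }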
[js_test:multi_coll_drop] 2016-04-06T02:54:02.137-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.137-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.148-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.149-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.151-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.153-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.156-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.157-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.158-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.158-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.158-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.159-0500 c20013| 2016-04-06T02:52:42.115-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.160-0500 c20013| 2016-04-06T02:52:42.116-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.161-0500 c20013| 2016-04-06T02:52:42.116-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.162-0500 c20013| 2016-04-06T02:52:42.116-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.166-0500 c20013| 2016-04-06T02:52:42.116-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.167-0500 c20013| 2016-04-06T02:52:42.116-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.171-0500 c20013| 2016-04-06T02:52:42.117-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.174-0500 c20013| 2016-04-06T02:52:42.117-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1416 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.175-0500 c20013| 2016-04-06T02:52:42.117-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1416 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.178-0500 c20013| 2016-04-06T02:52:42.117-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1416 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.181-0500 c20013| 2016-04-06T02:52:42.129-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.183-0500 c20013| 2016-04-06T02:52:42.129-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1418 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.184-0500 c20013| 2016-04-06T02:52:42.129-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1418 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.187-0500 c20013| 2016-04-06T02:52:42.130-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1418 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.194-0500 c20013| 2016-04-06T02:52:42.133-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1415 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|7, t: 3, h: 3362366614800144892, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.197-0500 c20013| 2016-04-06T02:52:42.133-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.209-0500 c20013| 2016-04-06T02:52:42.133-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|7 and ending at ts: Timestamp 1459929162000|7 [js_test:multi_coll_drop] 2016-04-06T02:54:02.210-0500 c20013| 2016-04-06T02:52:42.133-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:54:02.211-0500 c20013| 2016-04-06T02:52:42.134-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.212-0500 c20013| 2016-04-06T02:52:42.134-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.212-0500 c20013| 2016-04-06T02:52:42.134-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.212-0500 c20013| 2016-04-06T02:52:42.134-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.214-0500 c20013| 2016-04-06T02:52:42.134-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.215-0500 c20013| 2016-04-06T02:52:42.134-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.216-0500 c20013| 2016-04-06T02:52:42.134-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.216-0500 c20013| 2016-04-06T02:52:42.134-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.217-0500 c20013| 2016-04-06T02:52:42.134-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.217-0500 c20013| 2016-04-06T02:52:42.135-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.219-0500 c20013| 2016-04-06T02:52:42.135-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.219-0500 c20013| 2016-04-06T02:52:42.135-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.220-0500 c20013| 2016-04-06T02:52:42.135-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.221-0500 c20013| 2016-04-06T02:52:42.135-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:02.221-0500 c20013| 2016-04-06T02:52:42.135-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:02.224-0500 c20013| 2016-04-06T02:52:42.135-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.226-0500 c20013| 2016-04-06T02:52:42.135-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.226-0500 c20013| 2016-04-06T02:52:42.135-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:02.228-0500 c20013| 2016-04-06T02:52:42.135-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1421 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.135-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|6, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.230-0500 c20013| 2016-04-06T02:52:42.135-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1421 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.231-0500 c20013| 2016-04-06T02:52:42.135-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.232-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.232-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.233-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.234-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.235-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.238-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.242-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.243-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.244-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.245-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.251-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.251-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:02.254-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.257-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.260-0500 c20013| 2016-04-06T02:52:42.136-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.260-0500 c20013| 2016-04-06T02:52:42.137-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.261-0500 c20013| 2016-04-06T02:52:42.138-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.264-0500 c20013| 2016-04-06T02:52:42.141-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.267-0500 c20013| 2016-04-06T02:52:42.141-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1422 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.268-0500 c20013| 2016-04-06T02:52:42.141-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1422 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.269-0500 c20013| 2016-04-06T02:52:42.141-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1422 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.272-0500 c20013| 2016-04-06T02:52:42.148-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.274-0500 c20013| 2016-04-06T02:52:42.148-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1424 -- 
target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.276-0500 c20013| 2016-04-06T02:52:42.148-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1424 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.277-0500 c20013| 2016-04-06T02:52:42.148-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1424 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.279-0500 c20013| 2016-04-06T02:52:42.149-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1421 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.280-0500 c20013| 2016-04-06T02:52:42.149-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.284-0500 c20013| 2016-04-06T02:52:42.149-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:02.286-0500 c20013| 2016-04-06T02:52:42.149-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1427 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.149-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.287-0500 c20013| 2016-04-06T02:52:42.149-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1427 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.289-0500 c20013| 2016-04-06T02:52:42.152-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|58 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.293-0500 c20013| 2016-04-06T02:52:42.152-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.299-0500 c20013| 2016-04-06T02:52:42.152-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|58 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.317-0500 c20013| 2016-04-06T02:52:42.152-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:02.335-0500 c20013| 2016-04-06T02:52:42.153-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|58 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|7, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:02.345-0500 c20013| 2016-04-06T02:52:42.156-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1427 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|8, t: 3, h: 3399076997503720080, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04a65c17830b843f1b9'), state: 2, when: new Date(1459929162154), why: "splitting chunk [{ _id: -71.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.353-0500 c20013| 2016-04-06T02:52:42.156-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|8 and ending at ts: Timestamp 1459929162000|8 [js_test:multi_coll_drop] 2016-04-06T02:54:02.360-0500 c20013| 2016-04-06T02:52:42.157-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.362-0500 c20013| 2016-04-06T02:52:42.157-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.362-0500 c20013| 2016-04-06T02:52:42.157-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.363-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.365-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.365-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.366-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.367-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.368-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.370-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.374-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.376-0500 c20013| 2016-04-06T02:52:42.158-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:02.380-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.383-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.396-0500 c20013| 2016-04-06T02:52:42.158-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1429 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.158-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.396-0500 c20013| 2016-04-06T02:52:42.158-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:02.404-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.430-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.432-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.434-0500 c20013| 2016-04-06T02:52:42.158-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1429 on host mongovm16:20011 
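The conn10/conn15 reads nearby ("Waiting for 'committed' snapshot to be available for reading" followed by "Using 'committed' snapshot") show readConcern majority at work on the config server: each metadata read names an afterOpTime and blocks until that opTime is majority-committed, which is why every config.chunks and config.collections read in this log follows the matching _lastCommittedOpTime advance. The config.locks updates replicated in the same stretch first release the distributed lock (state: 0) and then re-take it (state: 2, with a fresh ts and the why-string "splitting chunk [{ _id: -71.0 }, { _id: MaxKey }) in multidrop.coll") before the next split's metadata commit. A rough shell equivalent of the chunks read, with filter values taken from the log; afterOpTime is omitted since it is an internal field supplied by the sharding code:

    // Majority-committed read of the chunk metadata touched by the split.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: {ns: "multidrop.coll", lastmod: {$gte: Timestamp(1000, 58)}},
        sort: {lastmod: 1},
        readConcern: {level: "majority"},
        maxTimeMS: 30000
    });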
[js_test:multi_coll_drop] 2016-04-06T02:54:02.437-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.438-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.447-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.448-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.448-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.451-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.459-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.463-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.464-0500 c20013| 2016-04-06T02:52:42.158-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.465-0500 c20013| 2016-04-06T02:52:42.159-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.465-0500 c20013| 2016-04-06T02:52:42.159-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.466-0500 c20013| 2016-04-06T02:52:42.159-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.468-0500 c20013| 2016-04-06T02:52:42.159-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.469-0500 c20013| 2016-04-06T02:52:42.159-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.469-0500 c20013| 2016-04-06T02:52:42.159-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.470-0500 c20013| 2016-04-06T02:52:42.159-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.475-0500 c20013| 2016-04-06T02:52:42.159-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.476-0500 c20013| 2016-04-06T02:52:42.159-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.523-0500 c20013| 2016-04-06T02:52:42.159-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.535-0500 c20013| 2016-04-06T02:52:42.159-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1430 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.536-0500 c20013| 2016-04-06T02:52:42.160-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1430 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.538-0500 c20013| 2016-04-06T02:52:42.160-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1430 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.545-0500 c20013| 2016-04-06T02:52:42.175-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.549-0500 c20013| 2016-04-06T02:52:42.175-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1432 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.551-0500 c20013| 2016-04-06T02:52:42.175-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1432 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.552-0500 c20013| 2016-04-06T02:52:42.175-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1432 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.555-0500 c20013| 2016-04-06T02:52:42.175-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1429 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.556-0500 c20013| 2016-04-06T02:52:42.175-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.558-0500 c20013| 2016-04-06T02:52:42.175-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:02.561-0500 c20013| 2016-04-06T02:52:42.175-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1435 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.175-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.563-0500 c20013| 2016-04-06T02:52:42.176-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1435 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.564-0500 c20013| 2016-04-06T02:52:42.178-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|8, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.565-0500 c20013| 2016-04-06T02:52:42.178-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|8, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.569-0500 c20013| 2016-04-06T02:52:42.178-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|8, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.569-0500 c20013| 2016-04-06T02:52:42.178-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:02.571-0500 c20013| 2016-04-06T02:52:42.178-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|8, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:02.574-0500 c20013| 2016-04-06T02:52:42.181-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1435 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|9, t: 3, h: 5374760814984136875, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-71.0", lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -71.0 }, max: { _id: -70.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-71.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-70.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.575-0500 c20013| 2016-04-06T02:52:42.181-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|9 and ending at ts: Timestamp 1459929162000|9 [js_test:multi_coll_drop] 2016-04-06T02:54:02.578-0500 c20013| 2016-04-06T02:52:42.182-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.580-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.580-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.583-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.584-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.584-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.585-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.586-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.588-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.589-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.589-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.590-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.592-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.592-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.594-0500 c20013| 2016-04-06T02:52:42.182-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.594-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.598-0500 c20013| 2016-04-06T02:52:42.183-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:02.599-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.600-0500 c20013| 2016-04-06T02:52:42.183-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-71.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:02.600-0500 c20013| 2016-04-06T02:52:42.183-0500 D QUERY [repl writer worker 2] Using idhack: { _id: "multidrop.coll-_id_-70.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:02.601-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:02.602-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.603-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.603-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.605-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.606-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.611-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.614-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.616-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.618-0500 c20013| 2016-04-06T02:52:42.183-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1437 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.183-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.620-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.620-0500 c20013| 2016-04-06T02:52:42.183-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1437 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.621-0500 c20013| 2016-04-06T02:52:42.183-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.623-0500 c20013| 2016-04-06T02:52:42.184-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.623-0500 c20013| 2016-04-06T02:52:42.184-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.624-0500 c20013| 2016-04-06T02:52:42.184-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.625-0500 c20013| 2016-04-06T02:52:42.184-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.626-0500 c20013| 2016-04-06T02:52:42.184-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.629-0500 c20013| 2016-04-06T02:52:42.184-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.635-0500 c20013| 2016-04-06T02:52:42.185-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.638-0500 c20013| 2016-04-06T02:52:42.185-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1438 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.639-0500 c20013| 2016-04-06T02:52:42.185-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1438 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.640-0500 c20013| 2016-04-06T02:52:42.185-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1438 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.647-0500 c20013| 2016-04-06T02:52:42.190-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.651-0500 c20013| 2016-04-06T02:52:42.190-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1440 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.654-0500 c20013| 2016-04-06T02:52:42.190-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1440 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.656-0500 c20013| 2016-04-06T02:52:42.191-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1440 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.658-0500 c20013| 2016-04-06T02:52:42.191-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1437 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.663-0500 c20013| 2016-04-06T02:52:42.191-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.665-0500 c20013| 2016-04-06T02:52:42.191-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:02.669-0500 c20013| 2016-04-06T02:52:42.191-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1443 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.191-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|9, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.672-0500 c20013| 2016-04-06T02:52:42.191-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1443 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.676-0500 c20013| 2016-04-06T02:52:42.200-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1443 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|10, t: 3, h: 8727673497830278375, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:42.191-0500-5704c04a65c17830b843f1ba", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162191), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -71.0 }, max: { _id: MaxKey } }, left: { min: { _id: -71.0 }, max: { _id: -70.0 }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -70.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.678-0500 c20013| 2016-04-06T02:52:42.200-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|10 and ending at ts: Timestamp 1459929162000|10 [js_test:multi_coll_drop] 2016-04-06T02:54:02.680-0500 c20013| 2016-04-06T02:52:42.202-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1445 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.202-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|9, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.684-0500 c20013| 2016-04-06T02:52:42.206-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.685-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.689-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.690-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.692-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.694-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.696-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.699-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.701-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.706-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.708-0500 c20013| 2016-04-06T02:52:42.206-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:02.711-0500 c20013| 2016-04-06T02:52:42.206-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1445 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.711-0500 c20013| 2016-04-06T02:52:42.206-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.713-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.725-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.725-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.729-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.729-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.730-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.731-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.731-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool 
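The thread churn above is the normal batch-application lifecycle on this 3.3 build: for each replication batch (here "replication batch size is 1") the rsSync thread spins up the repl writer worker pool, the workers apply the ops via _id lookups, and the pool is torn down again. Interleaved with it, rsBackgroundSync keeps a tailable cursor open on the sync source's oplog and renews it with getMore calls that piggyback this node's term and last known committed optime, which is how the commit point propagates around the set. A sketch of that getMore, shape only, with values copied from the log; the cursor id belongs to the fetcher's own session, so replaying it by hand would just return CursorNotFound:

  // Oplog tailing as done by the background sync fetcher: maxTimeMS bounds
  // the await, while term and lastKnownCommittedOpTime let the upstream node
  // update and broadcast the majority commit point.
  db.getSiblingDB("local").runCommand({
      getMore: NumberLong("19853084149"),
      collection: "oplog.rs",
      maxTimeMS: 2500,
      term: NumberLong(3),
      lastKnownCommittedOpTime: { ts: Timestamp(1459929162, 9), t: NumberLong(3) }
  });

The replSetUpdatePosition commands flowing in the other direction report each member's durable and applied optimes upstream; the response is just { ok: 1.0 } unless the reported config version is stale.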
[js_test:multi_coll_drop] 2016-04-06T02:54:02.737-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.738-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.739-0500 c20013| 2016-04-06T02:52:42.207-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.742-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.748-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.748-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.749-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.750-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.752-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.752-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.754-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.754-0500 c20013| 2016-04-06T02:52:42.209-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.756-0500 c20013| 2016-04-06T02:52:42.210-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.759-0500 c20013| 2016-04-06T02:52:42.210-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:02.761-0500 c20013| 2016-04-06T02:52:42.210-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:02.763-0500 c20013| 2016-04-06T02:52:42.210-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.766-0500 c20013| 2016-04-06T02:52:42.210-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1446 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.766-0500 c20013| 2016-04-06T02:52:42.211-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1446 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:02.770-0500 c20013| 2016-04-06T02:52:42.211-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1446 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.772-0500 c20013| 2016-04-06T02:52:42.213-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.774-0500 c20013| 2016-04-06T02:52:42.213-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:02.776-0500 c20013| 2016-04-06T02:52:42.213-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:509 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:02.779-0500 c20013| 2016-04-06T02:52:42.234-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.785-0500 c20013| 2016-04-06T02:52:42.235-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1448 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 
2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:02.786-0500 s20014| 2016-04-06T02:53:39.744-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] Starting asynchronous command 779 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:54:02.787-0500 s20014| 2016-04-06T02:53:39.745-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] Starting asynchronous command 779 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:54:02.788-0500 s20014| 2016-04-06T02:53:39.745-0500 I ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] Successfully connected to mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:54:02.791-0500 s20014| 2016-04-06T02:53:39.745-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] Request 779 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:02.797-0500 s20014| 2016-04-06T02:53:39.745-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] Starting asynchronous command 778 on host mongovm16:20010 [js_test:multi_coll_drop] 2016-04-06T02:54:02.802-0500 s20014| 2016-04-06T02:53:39.746-0500 D ASIO [NetworkInterfaceASIO-TaskExecutorPool-2-0] Request 778 finished with response: { waitedMS: 0, cursor: { firstBatch: [], id: 0, ns: "multidrop.coll" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.803-0500 s20014| 2016-04-06T02:53:40.701-0500 I COMMAND [conn1] DROP: multidrop.coll [js_test:multi_coll_drop] 2016-04-06T02:54:02.806-0500 s20014| 2016-04-06T02:53:40.701-0500 I SHARDING [conn1] about to log metadata event into changelog: { _id: "mongovm16-2016-04-06T02:53:40.701-0500-5704c08406c33406d4d9c0c4", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929220701), what: "dropCollection.start", ns: "multidrop.coll", details: {} } [js_test:multi_coll_drop] 2016-04-06T02:54:02.815-0500 s20014| 2016-04-06T02:53:40.701-0500 D ASIO [conn1] startCommand: RemoteCommand 782 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:10.701-0500 cmd:{ insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:53:40.701-0500-5704c08406c33406d4d9c0c4", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929220701), what: "dropCollection.start", ns: "multidrop.coll", details: {} } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.815-0500 s20014| 2016-04-06T02:53:40.701-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 782 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.817-0500 s20014| 2016-04-06T02:53:40.727-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 782 finished with response: { ok: 1, n: 1, opTime: { ts: Timestamp 1459929220000|3, t: 7 }, electionId: ObjectId('7fffffff0000000000000007') } [js_test:multi_coll_drop] 2016-04-06T02:54:02.818-0500 s20014| 2016-04-06T02:53:40.728-0500 D ASIO [conn1] startCommand: RemoteCommand 784 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:10.728-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.820-0500 s20014| 2016-04-06T02:53:40.728-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 784 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.823-0500 s20014| 2016-04-06T02:53:40.728-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 784 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.825-0500 s20014| 2016-04-06T02:53:40.728-0500 D SHARDING [conn1] dropCollection multidrop.coll started [js_test:multi_coll_drop] 2016-04-06T02:54:02.830-0500 s20014| 2016-04-06T02:53:40.728-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08406c33406d4d9c0c5, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:02.836-0500 s20014| 2016-04-06T02:53:40.728-0500 D ASIO [conn1] startCommand: RemoteCommand 786 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:10.728-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08406c33406d4d9c0c5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929220728), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.839-0500 s20014| 2016-04-06T02:53:40.728-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 786 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.842-0500 s20014| 2016-04-06T02:53:40.729-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 786 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.845-0500 s20014| 2016-04-06T02:53:40.729-0500 D ASIO [conn1] startCommand: RemoteCommand 788 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:10.729-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.846-0500 s20014| 2016-04-06T02:53:40.729-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 788 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.850-0500 s20014| 2016-04-06T02:53:40.729-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 788 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.853-0500 s20014| 2016-04-06T02:53:40.729-0500 D ASIO [conn1] startCommand: RemoteCommand 790 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:10.729-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 
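What follows is one full round of the config distributed-lock protocol that gates dropCollection: mongos tries to take the lock with a findAndModify whose query matches only a free lock document (state: 0) and whose upsert collides with the existing _id when another process holds it, so the E11000 above is the expected "lock busy" signal rather than a real failure. The follow-up reads of config.locks and config.lockpings (plus serverStatus, for the config node's localTime) let mongos judge whether the current holder, here the splitChunk lock from shard mongovm16:20010, is still alive or whether the lock can be overtaken; since the holder's ping is recent relative to the 900000 ms lock timeout, the attempt ends in "distributed lock 'multidrop.coll' was not acquired" and is retried after a short backoff. A sketch of the acquisition attempt in the mongo shell, with all values copied from the log and `configdb` an assumed handle to the config database:

  // Distributed lock acquisition: atomically flip a free lock document
  // (state: 0) to held (state: 2). If another process holds the lock, the
  // upsert hits the existing _id and fails with E11000 duplicate key.
  var configdb = db.getSiblingDB("config");
  configdb.runCommand({
      findAndModify: "locks",
      query: { _id: "multidrop.coll", state: 0 },
      update: { $set: {
          ts: ObjectId("5704c08406c33406d4d9c0c5"),  // lock session id
          state: 2,                                  // 2 = held, 0 = free
          who: "mongovm16:20014:1459929123:-665935931:conn1",
          process: "mongovm16:20014:1459929123:-665935931",
          when: new Date(1459929220728),
          why: "drop"
      } },
      upsert: true,
      new: true,
      writeConcern: { w: "majority", wtimeout: 15000 },
      maxTimeMS: 30000
  });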
2016-04-06T02:54:02.853-0500 s20014| 2016-04-06T02:53:40.729-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 790 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.856-0500 s20014| 2016-04-06T02:53:40.730-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 790 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.858-0500 s20014| 2016-04-06T02:53:40.730-0500 D ASIO [conn1] startCommand: RemoteCommand 792 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:10.730-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.858-0500 s20014| 2016-04-06T02:53:40.730-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 792 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.877-0500 s20014| 2016-04-06T02:53:40.732-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 792 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 103.0, uptimeMillis: 103590, uptimeEstimate: 89.0, localTime: new Date(1459929220730), asserts: { regular: 0, warning: 0, msg: 0, user: 41, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133856592, page_faults: 0 }, globalLock: { totalTime: 103587000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3747, w: 804, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1226, w: 249, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 655, w: 219 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 585, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 206770, bytesOut: 1352395, numRequests: 856 }, opcounters: { insert: 6, query: 261, update: 10, delete: 0, getmore: 113, command: 485 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133858112, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1376256, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1774720, total_free_bytes: 2886848, central_cache_free_bytes: 209984, transfer_cache_free_bytes: 902144, thread_cache_free_bytes: 1774720, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 
8, pages_per_span: 2, num_spans: 2, num_thread_objs: 154, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8592, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 383, num_central_objs: 604, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1576, num_central_objs: 163, num_transfer_objs: 1280, free_bytes: 96608, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 653, num_central_objs: 57, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 529, num_central_objs: 97, num_transfer_objs: 5632, free_bytes: 400512, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 469, num_central_objs: 65, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_pe .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 74 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 128 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 40 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 426, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 100, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 264, scannedObjects: 396 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 459, waits: 1662, 
scheduledNetCmd: 92, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 542, scheduledWork: 1817, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:02.877-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:02.877-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:02.878-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:02.878-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:02.878-0500 s20014| Succeeded 81 [js_test:multi_coll_drop] 2016-04-06T02:54:02.884-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.885-0500 s20014| 2016-04-06T02:53:40.732-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:02.887-0500 s20014| 2016-04-06T02:53:41.232-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08506c33406d4d9c0c6, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:02.893-0500 s20014| 2016-04-06T02:53:41.232-0500 D ASIO [conn1] startCommand: RemoteCommand 794 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:11.232-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c6'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221232), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.894-0500 s20014| 2016-04-06T02:53:41.232-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 794 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.895-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 794 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.897-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO [conn1] startCommand: RemoteCommand 796 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:11.233-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.897-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 796 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.902-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO 
[NetworkInterfaceASIO-ShardRegistry-0] Request 796 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.907-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO [conn1] startCommand: RemoteCommand 798 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:11.233-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.908-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 798 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.912-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 798 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.915-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO [conn1] startCommand: RemoteCommand 800 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:11.233-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.920-0500 s20014| 2016-04-06T02:53:41.233-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 800 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:02.926-0500 c20012| 2016-04-06T02:53:36.535-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:02.929-0500 c20012| 2016-04-06T02:53:36.535-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.932-0500 c20012| 2016-04-06T02:53:36.535-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.936-0500 c20012| 2016-04-06T02:53:36.535-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.937-0500 c20012| 2016-04-06T02:53:36.535-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:02.941-0500 c20012| 2016-04-06T02:53:36.535-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:02.943-0500 c20012| 2016-04-06T02:53:36.536-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.946-0500 c20012| 2016-04-06T02:53:36.538-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:02.948-0500 c20012| 2016-04-06T02:53:36.539-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.950-0500 c20012| 2016-04-06T02:53:36.539-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:02.958-0500 c20012| 2016-04-06T02:53:36.539-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.960-0500 c20012| 2016-04-06T02:53:36.539-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:02.968-0500 c20012| 2016-04-06T02:53:36.539-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:02.975-0500 c20012| 2016-04-06T02:53:36.541-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f263'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216539), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:02.997-0500 c20012| 2016-04-06T02:53:36.541-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.022-0500 c20012| 2016-04-06T02:53:36.541-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.028-0500 c20012| 2016-04-06T02:53:36.541-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.044-0500 c20012| 2016-04-06T02:53:36.541-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.046-0500 c20012| 2016-04-06T02:53:36.541-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:03.047-0500 c20012| 2016-04-06T02:53:36.541-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:03.054-0500 c20012| 2016-04-06T02:53:36.541-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f263'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216539), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.067-0500 c20012| 2016-04-06T02:53:36.541-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f263'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216539), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f263'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216539), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.068-0500 c20012| 2016-04-06T02:53:36.541-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.071-0500 c20012| 2016-04-06T02:53:36.541-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.076-0500 c20012| 2016-04-06T02:53:36.541-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.078-0500 c20012| 2016-04-06T02:53:36.541-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:03.092-0500 c20012| 2016-04-06T02:53:36.541-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.096-0500 c20012| 2016-04-06T02:53:36.548-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.100-0500 c20012| 2016-04-06T02:53:36.548-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.103-0500 c20012| 2016-04-06T02:53:36.548-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.105-0500 c20012| 2016-04-06T02:53:36.548-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:03.112-0500 c20012| 2016-04-06T02:53:36.548-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.115-0500 c20012| 2016-04-06T02:53:36.550-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.119-0500 c20012| 2016-04-06T02:53:36.555-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.127-0500 c20012| 2016-04-06T02:53:36.564-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f264'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216557), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.129-0500 c20012| 2016-04-06T02:53:36.564-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.131-0500 c20012| 2016-04-06T02:53:36.564-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.134-0500 c20012| 2016-04-06T02:53:36.564-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.138-0500 c20012| 2016-04-06T02:53:36.565-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.139-0500 c20012| 2016-04-06T02:53:36.565-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:03.143-0500 c20012| 2016-04-06T02:53:36.565-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:03.162-0500 c20012| 2016-04-06T02:53:36.565-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f264'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216557), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.192-0500 c20012| 2016-04-06T02:53:36.565-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f264'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216557), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f264'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216557), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.197-0500 c20012| 2016-04-06T02:53:36.566-0500 D COMMAND [conn38] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.226-0500 s20014| 2016-04-06T02:53:41.234-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 800 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 104.0, uptimeMillis: 104093, uptimeEstimate: 90.0, localTime: new Date(1459929221233), asserts: { regular: 0, warning: 0, msg: 0, user: 42, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133856920, page_faults: 0 }, globalLock: { totalTime: 104091000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3754, w: 805, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1229, w: 250, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 657, w: 220 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 586, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 207731, bytesOut: 1379432, numRequests: 860 }, opcounters: { insert: 6, query: 263, update: 10, delete: 0, getmore: 113, command: 487 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133858440, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1335296, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1811304, total_free_bytes: 2927480, central_cache_free_bytes: 205840, transfer_cache_free_bytes: 910336, thread_cache_free_bytes: 1811304, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1575, num_central_objs: 163, num_transfer_objs: 1280, free_bytes: 96576, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 651, num_central_objs: 59, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 542, num_central_objs: 83, 
num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 467, num_central_objs: 67, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_pe .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 74 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 128 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 41 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 428, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 102, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 266, scannedObjects: 398 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 459, waits: 1668, scheduledNetCmd: 92, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 542, scheduledWork: 1823, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:03.231-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:03.234-0500 c20013| 2016-04-06T02:52:42.235-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1448 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.237-0500 c20013| 2016-04-06T02:52:42.235-0500 D ASIO 
[NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1448 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.244-0500 c20013| 2016-04-06T02:52:42.235-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1445 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.247-0500 c20013| 2016-04-06T02:52:42.236-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|10, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.249-0500 c20013| 2016-04-06T02:52:42.236-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:03.253-0500 c20013| 2016-04-06T02:52:42.236-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1451 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.236-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.255-0500 c20013| 2016-04-06T02:52:42.236-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1451 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.259-0500 c20013| 2016-04-06T02:52:42.242-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1451 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|11, t: 3, h: -1996309872813345203, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.265-0500 c20013| 2016-04-06T02:52:42.242-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|11 and ending at ts: Timestamp 1459929162000|11 [js_test:multi_coll_drop] 2016-04-06T02:54:03.268-0500 c20013| 2016-04-06T02:52:42.242-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.280-0500 c20013| 2016-04-06T02:52:42.242-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.281-0500 c20013| 2016-04-06T02:52:42.242-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.281-0500 c20013| 2016-04-06T02:52:42.242-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.284-0500 c20013| 2016-04-06T02:52:42.242-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.285-0500 c20013| 2016-04-06T02:52:42.242-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.286-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.287-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.288-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.289-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.294-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.294-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.295-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.297-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.297-0500 c20013| 2016-04-06T02:52:42.243-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:03.298-0500 c20013| 2016-04-06T02:52:42.243-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.299-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.300-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.300-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.301-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.302-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
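The E11000 on config.locks earlier in this stretch is the expected collision path of the config-server distributed lock, not a test failure: the lock is claimed by an upserting findAndModify keyed on the namespace with state: 0 (unlocked), so a contender that loses the race surfaces a duplicate-key error on _id_ and falls back to a majority read of the current lock document (the find on config.locks that conn38 runs next). A minimal shell sketch of that shape, using the field names from the log; the tryLock helper itself is hypothetical, not the server's implementation:

    function tryLock(configDB, ns, details) {
        try {
            // state: 0 = unlocked, 2 = held (as in the log). The upsert races
            // with other contenders; the loser lands in the catch block below
            // with an E11000 duplicate key on _id_.
            return configDB.locks.findAndModify({
                query: { _id: ns, state: 0 },
                update: { $set: details },  // ts, state: 2, who, process, when, why
                upsert: true,
                new: true,
                writeConcern: { w: "majority", wtimeout: 15000 }
            });
        } catch (e) {
            // Lost the race: re-read the current holder with a majority read,
            // mirroring the follow-up find on config.locks above.
            return configDB.runCommand({
                find: "locks",
                filter: { _id: ns },
                readConcern: { level: "majority" },
                limit: 1
            }).cursor.firstBatch[0];
        }
    }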
2016-04-06T02:54:03.304-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.306-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.306-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.307-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.307-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.309-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.309-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.310-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.310-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.311-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.312-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.312-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.315-0500 c20013| 2016-04-06T02:52:42.243-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.319-0500 c20013| 2016-04-06T02:52:42.244-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1453 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.244-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|10, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.320-0500 c20013| 2016-04-06T02:52:42.244-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1453 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.324-0500 c20011| 2016-04-06T02:53:18.977-0500 D COMMAND [conn54] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929198273), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.326-0500 c20011| 2016-04-06T02:53:18.977-0500 D QUERY [conn54] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.330-0500 c20011| 2016-04-06T02:53:18.977-0500 D REPL [conn54] 
Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.331-0500 c20011| 2016-04-06T02:53:18.977-0500 D REPL [conn54] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.343-0500 c20011| 2016-04-06T02:53:18.977-0500 I WRITE [conn54] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929198273), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.345-0500 c20011| 2016-04-06T02:53:18.979-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.354-0500 c20011| 2016-04-06T02:53:18.979-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.355-0500 c20011| 2016-04-06T02:53:18.979-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:03.356-0500 c20011| 2016-04-06T02:53:18.979-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|1, t: 5 } and is durable through: { ts: Timestamp 1459929194000|2, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.357-0500 c20011| 2016-04-06T02:53:18.979-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.360-0500 c20011| 2016-04-06T02:53:18.979-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.363-0500 c20011| 2016-04-06T02:53:18.979-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.370-0500 c20011| 2016-04-06T02:53:18.979-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: 
{ ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.380-0500 c20011| 2016-04-06T02:53:18.979-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } cursorid:19461455963 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.383-0500 c20011| 2016-04-06T02:53:18.979-0500 D REPL [conn54] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.388-0500 c20011| 2016-04-06T02:53:18.979-0500 D REPL [conn54] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.391-0500 c20011| 2016-04-06T02:53:18.979-0500 D REPL [conn54] Required snapshot optime: { ts: Timestamp 1459929198000|3, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.394-0500 c20011| 2016-04-06T02:53:18.983-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.397-0500 c20011| 2016-04-06T02:53:18.984-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.398-0500 c20011| 2016-04-06T02:53:18.984-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:03.402-0500 c20013| 2016-04-06T02:52:42.247-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.403-0500 c20013| 2016-04-06T02:52:42.247-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.412-0500 c20013| 2016-04-06T02:52:42.247-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.421-0500 c20013| 2016-04-06T02:52:42.247-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1454 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|10, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.424-0500 c20013| 2016-04-06T02:52:42.247-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1454 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.429-0500 c20013| 2016-04-06T02:52:42.248-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1454 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.440-0500 c20013| 2016-04-06T02:52:42.274-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.446-0500 c20013| 2016-04-06T02:52:42.274-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1456 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.449-0500 c20013| 2016-04-06T02:52:42.274-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1456 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.453-0500 c20013| 2016-04-06T02:52:42.275-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1456 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.456-0500 c20013| 2016-04-06T02:52:42.288-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1453 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.460-0500 c20013| 2016-04-06T02:52:42.289-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.461-0500 c20013| 2016-04-06T02:52:42.289-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:03.464-0500 c20013| 2016-04-06T02:52:42.289-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1459 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.289-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.465-0500 c20013| 2016-04-06T02:52:42.304-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1459 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.468-0500 c20013| 2016-04-06T02:52:42.304-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|60 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.471-0500 c20013| 2016-04-06T02:52:42.304-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.479-0500 c20013| 2016-04-06T02:52:42.304-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|60 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.482-0500 c20013| 2016-04-06T02:52:42.304-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:03.485-0500 c20013| 2016-04-06T02:52:42.305-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|60 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.488-0500 c20013| 2016-04-06T02:52:42.305-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.497-0500 c20013| 2016-04-06T02:52:42.305-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.502-0500 c20013| 2016-04-06T02:52:42.305-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.505-0500 c20013| 2016-04-06T02:52:42.305-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:03.518-0500 c20013| 2016-04-06T02:52:42.313-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|11, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.524-0500 c20013| 2016-04-06T02:52:42.314-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1459 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|12, t: 3, h: -1176532190065910822, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04a65c17830b843f1bb'), state: 2, when: new Date(1459929162313), why: "splitting chunk [{ _id: -70.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.528-0500 c20013| 2016-04-06T02:52:42.314-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|12 and ending at ts: Timestamp 1459929162000|12 [js_test:multi_coll_drop] 2016-04-06T02:54:03.532-0500 c20013| 2016-04-06T02:52:42.316-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1461 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.316-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|11, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.537-0500 c20013| 2016-04-06T02:52:42.317-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1461 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.543-0500 c20013| 2016-04-06T02:52:42.318-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.544-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.546-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.546-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.547-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.549-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.550-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.552-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.552-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.552-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.553-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.554-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.557-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.557-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.557-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.559-0500 c20013| 2016-04-06T02:52:42.319-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:03.559-0500 c20013| 2016-04-06T02:52:42.319-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.560-0500 c20013| 2016-04-06T02:52:42.320-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.561-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.561-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.561-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
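Every config read in this stretch carries readConcern { level: "majority", afterOpTime: ... }, which is why c20013 logs "Waiting for 'committed' snapshot to be available for reading" before answering, and why the w: "majority" writes above stall on "Required snapshot optime ... is not yet part of the current 'committed' snapshot" until replication catches up. Issued by hand, the chunk refresh looks like the sketch below; the afterOpTime value is illustrative and would normally be the optime returned by the preceding metadata write:

    // Causal read of the chunk metadata, as issued against the config server.
    var lastSeenOpTime = { ts: Timestamp(1459929162, 11), t: NumberLong(3) };  // illustrative
    var res = db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1000, 60) } },
        sort: { lastmod: 1 },
        readConcern: { level: "majority", afterOpTime: lastSeenOpTime },
        maxTimeMS: 30000
    });
    // res.cursor.firstBatch holds the chunks the IXSCAN above returned.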
2016-04-06T02:54:03.562-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.563-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.564-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.565-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.567-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.568-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.568-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.569-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.570-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.573-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.575-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.580-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.580-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.581-0500 c20013| 2016-04-06T02:52:42.320-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.584-0500 c20013| 2016-04-06T02:52:42.321-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.587-0500 c20013| 2016-04-06T02:52:42.322-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.596-0500 c20013| 2016-04-06T02:52:42.322-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1462 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.597-0500 c20013| 2016-04-06T02:52:42.322-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1462 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.601-0500 c20013| 2016-04-06T02:52:42.323-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1462 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.604-0500 c20013| 2016-04-06T02:52:42.325-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.609-0500 c20013| 2016-04-06T02:52:42.325-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1464 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.610-0500 c20013| 2016-04-06T02:52:42.325-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1464 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.610-0500 c20013| 2016-04-06T02:52:42.325-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1464 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.611-0500 c20013| 2016-04-06T02:52:42.327-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|12, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.612-0500 c20013| 2016-04-06T02:52:42.327-0500 D REPL [conn15] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929162000|12, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929162000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.613-0500 c20013| 2016-04-06T02:52:42.327-0500 D REPL [conn15] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999976μs [js_test:multi_coll_drop] 2016-04-06T02:54:03.621-0500 c20013| 2016-04-06T02:52:42.327-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1461 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.622-0500 c20013| 2016-04-06T02:52:42.327-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.630-0500 c20013| 2016-04-06T02:52:42.327-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:03.632-0500 c20013| 2016-04-06T02:52:42.327-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|12, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.636-0500 c20013| 2016-04-06T02:52:42.327-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|12, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.638-0500 c20013| 2016-04-06T02:52:42.327-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1467 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.327-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|12, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.639-0500 c20013| 2016-04-06T02:52:42.328-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:03.645-0500 c20013| 2016-04-06T02:52:42.328-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|12, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:03.646-0500 c20013| 2016-04-06T02:52:42.328-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1467 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.653-0500 c20013| 2016-04-06T02:52:42.332-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1467 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|13, t: 3, h: 5893081311686824656, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: -69.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-70.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-69.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.657-0500 c20013| 2016-04-06T02:52:42.332-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|13 and ending at ts: Timestamp 1459929162000|13 [js_test:multi_coll_drop] 2016-04-06T02:54:03.660-0500 c20013| 2016-04-06T02:52:42.332-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.662-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.665-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.665-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.667-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.668-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.670-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.673-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.674-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.678-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.682-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.683-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.686-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.686-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.688-0500 c20013| 2016-04-06T02:52:42.333-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:03.691-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.691-0500 c20013| 2016-04-06T02:52:42.333-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.692-0500 c20013| 2016-04-06T02:52:42.333-0500 D QUERY [repl writer worker 7] Using idhack: { _id: "multidrop.coll-_id_-70.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.693-0500 c20013| 2016-04-06T02:52:42.334-0500 D QUERY [repl writer worker 7] Using idhack: { _id: "multidrop.coll-_id_-69.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.696-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.698-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
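The one-document batch applied above is the split itself: a single applyOps oplog entry that rewrites chunk [{ _id: -70.0 }, { _id: MaxKey }) as [{ _id: -70.0 }, { _id: -69.0 }) plus [{ _id: -69.0 }, { _id: MaxKey }), each upserted by _id with a bumped chunk version (lastmod). Reconstructed from the nextBatch document above (b: true is the oplog upsert flag):

    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o2: { _id: "multidrop.coll-_id_-70.0" },
              o: { _id: "multidrop.coll-_id_-70.0", lastmod: Timestamp(1000, 63),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll", min: { _id: -70.0 }, max: { _id: -69.0 },
                   shard: "shard0000" } },
            { op: "u", b: true, ns: "config.chunks",
              o2: { _id: "multidrop.coll-_id_-69.0" },
              o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp(1000, 64),
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: MaxKey },
                   shard: "shard0000" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });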
2016-04-06T02:54:03.699-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.701-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.703-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.705-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.705-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.707-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.707-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.708-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.709-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.709-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.713-0500 c20013| 2016-04-06T02:52:42.334-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1469 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.334-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|12, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.714-0500 c20013| 2016-04-06T02:52:42.334-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1469 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.715-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.720-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.721-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.722-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.723-0500 c20013| 2016-04-06T02:52:42.334-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.728-0500 c20013| 2016-04-06T02:52:42.335-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.731-0500 c20013| 2016-04-06T02:52:42.335-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.735-0500 c20013| 2016-04-06T02:52:42.335-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1470 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.736-0500 c20013| 2016-04-06T02:52:42.335-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1470 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.738-0500 c20013| 2016-04-06T02:52:42.336-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1470 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.742-0500 c20013| 2016-04-06T02:52:42.347-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.749-0500 c20013| 2016-04-06T02:52:42.347-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1472 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.750-0500 c20013| 2016-04-06T02:52:42.347-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1472 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.751-0500 c20013| 2016-04-06T02:52:42.348-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1472 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.763-0500 c20013| 2016-04-06T02:52:42.349-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1469 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|14, t: 3, h: 7513391388336437937, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:42.348-0500-5704c04a65c17830b843f1bc", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162348), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -70.0 }, max: { _id: MaxKey } }, left: { min: { _id: -70.0 }, max: { _id: -69.0 }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -69.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.772-0500 c20013| 2016-04-06T02:52:42.349-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|13, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.777-0500 c20013| 2016-04-06T02:52:42.349-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|14 and ending at ts: Timestamp 1459929162000|14 [js_test:multi_coll_drop] 2016-04-06T02:54:03.782-0500 c20013| 2016-04-06T02:52:42.350-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.786-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.791-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.792-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.795-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.798-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.800-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.800-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.802-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.802-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.802-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.803-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
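The document fetched in Request 1469 above is the audit trail of that split: config.changelog records one entry per metadata change, with the parent range under details.before and the resulting halves under details.left and details.right. To pull the record back out after the fact:

    // Most recent split recorded for this namespace.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .limit(1)
        .pretty();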
2016-04-06T02:54:03.803-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.804-0500 c20013| 2016-04-06T02:52:42.350-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.805-0500 c20013| 2016-04-06T02:52:42.350-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:03.806-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.807-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.808-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.809-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.810-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.811-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.812-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.814-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.815-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.818-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.820-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.822-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.823-0500 c20013| 2016-04-06T02:52:42.351-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.828-0500 c20013| 2016-04-06T02:52:42.352-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1475 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.352-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|13, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.829-0500 c20013| 2016-04-06T02:52:42.352-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.830-0500 c20013| 2016-04-06T02:52:42.352-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
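The getMore calls above (cursor 19853084149 on local.oplog.rs, maxTimeMS: 2500) are the background-sync fetcher tailing the sync source's oplog. A rough shell approximation, assuming direct access to mongovm16:20011; the real fetcher also passes term and lastKnownCommittedOpTime, which a plain shell cursor cannot:

// Tail the sync source's oplog from the last optime the fetcher reached.
var conn = new Mongo("mongovm16:20011");
var cur = conn.getDB("local").oplog.rs
              .find({ ts: { $gte: Timestamp(1459929162, 14) } })
              .addOption(DBQuery.Option.tailable)
              .addOption(DBQuery.Option.awaitData);
while (cur.hasNext()) {
    printjson(cur.next());  // each doc is one replicated op: op "i"/"u"/"c", ns, o, ...
}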
2016-04-06T02:54:03.831-0500 c20013| 2016-04-06T02:52:42.352-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1475 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.832-0500 c20013| 2016-04-06T02:52:42.352-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.834-0500 c20013| 2016-04-06T02:52:42.352-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.836-0500 c20013| 2016-04-06T02:52:42.356-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.837-0500 c20013| 2016-04-06T02:52:42.356-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.839-0500 c20013| 2016-04-06T02:52:42.356-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.848-0500 c20013| 2016-04-06T02:52:42.356-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.854-0500 c20013| 2016-04-06T02:52:42.356-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1476 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.855-0500 c20013| 2016-04-06T02:52:42.356-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1476 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.860-0500 c20013| 2016-04-06T02:52:42.357-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1476 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.867-0500 c20013| 2016-04-06T02:52:42.361-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } 
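Each "Updating _lastCommittedOpTime" line is this member advancing its view of the majority commit point as the progress reports arrive. A hedged way to observe the same value from the shell; replSetGetStatus certainly exists, but whether the optimes subdocument on this 3.3.x build exposes lastCommittedOpTime is an assumption:

// Inspect the commit point this member currently knows about.
var status = db.adminCommand({ replSetGetStatus: 1 });
printjson(status.optimes);  // assumed to include lastCommittedOpTime here;
                            // the field layout varies by server release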
[js_test:multi_coll_drop] 2016-04-06T02:54:03.873-0500 c20013| 2016-04-06T02:52:42.361-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1478 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:03.878-0500 c20013| 2016-04-06T02:52:42.361-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1478 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.878-0500 c20013| 2016-04-06T02:52:42.361-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1478 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.889-0500 c20013| 2016-04-06T02:52:42.390-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1475 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|15, t: 3, h: -1518097638317544158, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.901-0500 c20013| 2016-04-06T02:52:42.390-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|14, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:03.904-0500 c20013| 2016-04-06T02:52:42.390-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|15 and ending at ts: Timestamp 1459929162000|15 [js_test:multi_coll_drop] 2016-04-06T02:54:03.905-0500 c20013| 2016-04-06T02:52:42.391-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:03.910-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.913-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.913-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.924-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.925-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.928-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.932-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.932-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.934-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.935-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.938-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.942-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.944-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.945-0500 c20013| 2016-04-06T02:52:42.391-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:03.948-0500 c20013| 2016-04-06T02:52:42.391-0500 D QUERY [repl writer worker 14] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:03.952-0500 c20013| 2016-04-06T02:52:42.391-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.954-0500 c20013| 2016-04-06T02:52:42.392-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.963-0500 c20013| 2016-04-06T02:52:42.392-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1481 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.392-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|14, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:03.966-0500 c20013| 2016-04-06T02:52:42.392-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
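The op: "u" on config.locks applied above flips the distributed-lock document for "multidrop.coll" to state 0 (released); a few entries later it is retaken with state: 2 and why: "splitting chunk [{ _id: -69.0 }, { _id: MaxKey }) in multidrop.coll". The lock document can be inspected directly; a small sketch (the 0 = unlocked / 2 = locked encoding is the convention these 3.x logs reflect):

// Look at the distributed lock guarding multidrop.coll on the config servers.
printjson(db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" }));
// Per the oplog entries above, expect: state (0 or 2), ts (the holder's
// ObjectId), when, and the human-readable why string.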
2016-04-06T02:54:03.968-0500 c20013| 2016-04-06T02:52:42.393-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1481 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:03.968-0500 c20013| 2016-04-06T02:52:42.393-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.970-0500 c20013| 2016-04-06T02:52:42.393-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.970-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.972-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.974-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.974-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.975-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.979-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.979-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.989-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.994-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:03.994-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.014-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.014-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.014-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.014-0500 c20013| 2016-04-06T02:52:42.394-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.014-0500 c20013| 2016-04-06T02:52:42.394-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.018-0500 c20013| 2016-04-06T02:52:42.394-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.019-0500 c20013| 2016-04-06T02:52:42.394-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1482 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|14, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.019-0500 c20013| 2016-04-06T02:52:42.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1482 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.020-0500 c20013| 2016-04-06T02:52:42.395-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1482 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.022-0500 c20013| 2016-04-06T02:52:42.407-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.025-0500 c20013| 2016-04-06T02:52:42.407-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1484 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.026-0500 c20013| 2016-04-06T02:52:42.407-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1484 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.028-0500 c20013| 2016-04-06T02:52:42.407-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1484 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.029-0500 c20013| 2016-04-06T02:52:42.410-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1481 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.029-0500 c20013| 2016-04-06T02:52:42.410-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|15, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.030-0500 c20013| 2016-04-06T02:52:42.410-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:04.032-0500 c20013| 2016-04-06T02:52:42.410-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1487 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.410-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|15, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.032-0500 c20013| 2016-04-06T02:52:42.411-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1487 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.034-0500 c20013| 2016-04-06T02:52:42.416-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.036-0500 c20013| 2016-04-06T02:52:42.416-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.037-0500 c20013| 2016-04-06T02:52:42.416-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.044-0500 c20013| 2016-04-06T02:52:42.416-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:04.048-0500 c20013| 2016-04-06T02:52:42.424-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|15, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:54:04.052-0500 c20013| 2016-04-06T02:52:42.442-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1487 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|16, t: 3, h: -945827870375542200, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04a65c17830b843f1bd'), state: 2, when: new Date(1459929162436), why: "splitting chunk [{ _id: -69.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.054-0500 c20013| 2016-04-06T02:52:42.442-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|16 and ending at ts: Timestamp 1459929162000|16 [js_test:multi_coll_drop] 2016-04-06T02:54:04.056-0500 c20013| 2016-04-06T02:52:42.443-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.058-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.059-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.060-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.061-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.063-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.063-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.065-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.066-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.067-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.072-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.074-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.074-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.074-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.082-0500 c20013| 2016-04-06T02:52:42.443-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:04.082-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.084-0500 c20013| 2016-04-06T02:52:42.443-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.086-0500 c20013| 2016-04-06T02:52:42.444-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:04.087-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.091-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.092-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
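The conn10 read a few entries back is worth unpacking: the client asks for the newest chunk of multidrop.coll with readConcern { level: "majority", afterOpTime: ... }, and the server blocks ("Waiting for 'committed' snapshot") until its majority-committed snapshot reaches that opTime. The same command, reissued verbatim against c20013:

// The majority read from the log as an explicit runCommand; the server will
// not answer until a committed snapshot at or after the given opTime exists.
var res = db.getSiblingDB("config").runCommand({
    find: "chunks",
    filter: { ns: "multidrop.coll" },
    sort: { lastmod: -1 },
    limit: 1,
    readConcern: { level: "majority",
                   afterOpTime: { ts: Timestamp(1459929162, 15), t: NumberLong(3) } },
    maxTimeMS: 30000
});
printjson(res.cursor.firstBatch);  // the chunk with the highest lastmod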
2016-04-06T02:54:04.092-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.092-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.096-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.098-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.100-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.104-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.107-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.107-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.109-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.111-0500 c20013| 2016-04-06T02:52:42.444-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.111-0500 c20013| 2016-04-06T02:52:42.446-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.119-0500 c20013| 2016-04-06T02:52:42.446-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1489 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.446-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|15, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.120-0500 c20013| 2016-04-06T02:52:42.446-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1489 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.123-0500 c20013| 2016-04-06T02:52:42.448-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.124-0500 c20013| 2016-04-06T02:52:42.448-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.124-0500 c20013| 2016-04-06T02:52:42.448-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.126-0500 c20013| 2016-04-06T02:52:42.452-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.137-0500 c20013| 2016-04-06T02:52:42.452-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.151-0500 c20013| 2016-04-06T02:52:42.452-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1490 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|15, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.155-0500 c20013| 2016-04-06T02:52:42.452-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1490 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.160-0500 c20013| 2016-04-06T02:52:42.453-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1490 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.168-0500 c20013| 2016-04-06T02:52:42.475-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.185-0500 c20013| 2016-04-06T02:52:42.475-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1492 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.188-0500 c20013| 2016-04-06T02:52:42.475-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1492 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.189-0500 c20013| 2016-04-06T02:52:42.475-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1492 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.189-0500 c20013| 2016-04-06T02:52:42.488-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1489 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.191-0500 c20013| 2016-04-06T02:52:42.488-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|16, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.191-0500 c20013| 2016-04-06T02:52:42.489-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:04.193-0500 c20013| 2016-04-06T02:52:42.489-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1495 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.489-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|16, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.195-0500 c20013| 2016-04-06T02:52:42.489-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1495 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.197-0500 c20013| 2016-04-06T02:52:42.491-0500 D COMMAND [conn15] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|64 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|16, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.201-0500 c20013| 2016-04-06T02:52:42.491-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|16, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.204-0500 c20013| 2016-04-06T02:52:42.491-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|64 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|16, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.206-0500 c20013| 2016-04-06T02:52:42.491-0500 D QUERY [conn15] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:04.207-0500 *** Stepping down connection to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:04.211-0500 c20013| 2016-04-06T02:52:42.492-0500 I COMMAND [conn15] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|64 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|16, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:04.217-0500 c20013| 2016-04-06T02:52:42.495-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1495 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|17, t: 3, h: -5117580334369715764, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: -68.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-69.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-68.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.220-0500 c20013| 2016-04-06T02:52:42.495-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|17 and ending at ts: Timestamp 1459929162000|17 [js_test:multi_coll_drop] 2016-04-06T02:54:04.222-0500 c20013| 2016-04-06T02:52:42.496-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.225-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.226-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.228-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.229-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.229-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.230-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.230-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.231-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.232-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.235-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.236-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.236-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.237-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.239-0500 c20013| 2016-04-06T02:52:42.496-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:04.240-0500 c20013| 2016-04-06T02:52:42.496-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.241-0500 c20013| 2016-04-06T02:52:42.496-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-69.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:04.242-0500 c20013| 2016-04-06T02:52:42.496-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll-_id_-68.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:04.245-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.248-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.248-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
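The op: "c" entry fetched just before this batch is the split commit itself: one applyOps against config.$cmd that upserts both halves of the chunk as a single atomic batch. Its shape, reconstructed from the oplog document (the shard's split path issues this; the test never does so by hand). Note again the millisecond-style Timestamp printing: "lastmod: Timestamp 1000|65" is chunk version (1, 65):

// The applyOps committing the split of [-69, MaxKey) into [-69, -68) and
// [-68, MaxKey); b: true makes each update an upsert.
var splitCommit = {
    applyOps: [
        { op: "u", b: true, ns: "config.chunks",
          o: { _id: "multidrop.coll-_id_-69.0", lastmod: Timestamp(1, 65),
               lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
               ns: "multidrop.coll", min: { _id: -69.0 }, max: { _id: -68.0 },
               shard: "shard0000" },
          o2: { _id: "multidrop.coll-_id_-69.0" } },
        { op: "u", b: true, ns: "config.chunks",
          o: { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp(1, 66),
               lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
               ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: MaxKey },
               shard: "shard0000" },
          o2: { _id: "multidrop.coll-_id_-68.0" } }
    ],
    writeConcern: { w: "majority", wtimeout: 15000 },
    maxTimeMS: 30000
};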
2016-04-06T02:54:04.248-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.249-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.250-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.251-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.252-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.259-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.260-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.261-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.262-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.263-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.263-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.264-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.265-0500 c20013| 2016-04-06T02:52:42.497-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.271-0500 c20013| 2016-04-06T02:52:42.497-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1497 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.497-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|16, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.272-0500 c20013| 2016-04-06T02:52:42.497-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1497 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.274-0500 c20013| 2016-04-06T02:52:42.510-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.276-0500 c20013| 2016-04-06T02:52:42.510-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.279-0500 c20013| 2016-04-06T02:52:42.516-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.283-0500 c20013| 2016-04-06T02:52:42.517-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.288-0500 c20013| 2016-04-06T02:52:42.517-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1498 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.290-0500 c20013| 2016-04-06T02:52:42.517-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1498 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.291-0500 c20013| 2016-04-06T02:52:42.517-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1498 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.295-0500 c20013| 2016-04-06T02:52:42.525-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.301-0500 c20013| 2016-04-06T02:52:42.525-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1500 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.302-0500 c20013| 2016-04-06T02:52:42.525-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1500 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.304-0500 c20013| 2016-04-06T02:52:42.525-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1500 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.305-0500 c20013| 2016-04-06T02:52:42.530-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1497 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.307-0500 c20013| 2016-04-06T02:52:42.530-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|17, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.308-0500 c20013| 2016-04-06T02:52:42.530-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:04.310-0500 c20013| 2016-04-06T02:52:42.530-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1503 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.530-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|17, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.312-0500 c20013| 2016-04-06T02:52:42.530-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1503 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.320-0500 c20013| 2016-04-06T02:52:42.531-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1503 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|18, t: 3, h: -4361096421252425844, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:42.526-0500-5704c04a65c17830b843f1be", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162526), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -69.0 }, max: { _id: MaxKey } }, left: { min: { _id: -69.0 }, max: { _id: -68.0 }, lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -68.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.326-0500 c20013| 2016-04-06T02:52:42.531-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|18 and ending at ts: Timestamp 1459929162000|18 [js_test:multi_coll_drop] 2016-04-06T02:54:04.328-0500 c20013| 2016-04-06T02:52:42.533-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1505 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.533-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|17, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.332-0500 c20013| 2016-04-06T02:52:42.534-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1505 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.335-0500 c20013| 2016-04-06T02:52:42.536-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.340-0500 c20013| 2016-04-06T02:52:42.536-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.340-0500 c20013| 2016-04-06T02:52:42.536-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.341-0500 c20013| 2016-04-06T02:52:42.536-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.343-0500 c20013| 2016-04-06T02:52:42.536-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.344-0500 c20013| 2016-04-06T02:52:42.536-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.346-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.346-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.348-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.350-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.352-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.355-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.355-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.357-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.357-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.359-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.360-0500 c20013| 2016-04-06T02:52:42.537-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:04.361-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.362-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.363-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.366-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
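Every split also leaves a breadcrumb in config.changelog, as in the op: "i" fetched above: the before range plus the resulting left and right chunks. A quick way to review this collection's split history from the shell:

// List the most recent split events recorded for multidrop.coll.
db.getSiblingDB("config").changelog
  .find({ what: "split", ns: "multidrop.coll" })
  .sort({ time: -1 })
  .limit(5)
  .forEach(function(ev) {
      print(ev.time + "  split " + tojson(ev.details.before) +
            " at " + tojson(ev.details.left.max));
  });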
2016-04-06T02:54:04.371-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.372-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.376-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.379-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.381-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.383-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.384-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.384-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.384-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.386-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.388-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.389-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.393-0500 c20013| 2016-04-06T02:52:42.537-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.394-0500 c20013| 2016-04-06T02:52:42.538-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.400-0500 c20013| 2016-04-06T02:52:42.538-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.405-0500 c20013| 2016-04-06T02:52:42.538-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1506 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.406-0500 c20013| 2016-04-06T02:52:42.538-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1506 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.407-0500 c20013| 2016-04-06T02:52:42.538-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1506 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.415-0500 c20013| 2016-04-06T02:52:42.630-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.418-0500 c20013| 2016-04-06T02:52:42.630-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1508 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.428-0500 c20013| 2016-04-06T02:52:42.630-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1508 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.429-0500 c20013| 2016-04-06T02:52:42.631-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1508 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.431-0500 c20013| 2016-04-06T02:52:42.632-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1505 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.435-0500 c20013| 2016-04-06T02:52:42.633-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|18, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.436-0500 c20013| 2016-04-06T02:52:42.633-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:04.438-0500 c20013| 2016-04-06T02:52:42.633-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1511 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.633-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|18, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.439-0500 c20013| 2016-04-06T02:52:42.633-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1511 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.443-0500 c20013| 2016-04-06T02:52:42.634-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1511 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|19, t: 3, h: 6416545433872635095, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.445-0500 c20013| 2016-04-06T02:52:42.634-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|19 and ending at ts: Timestamp 1459929162000|19 [js_test:multi_coll_drop] 2016-04-06T02:54:04.446-0500 c20013| 2016-04-06T02:52:42.634-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.447-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.448-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.451-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.452-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.453-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.458-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.458-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.459-0500 c20013| 2016-04-06T02:52:42.635-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:04.460-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.461-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.462-0500 c20013| 2016-04-06T02:52:42.635-0500 D QUERY [repl writer worker 4] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:04.463-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.465-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.467-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.469-0500 c20013| 2016-04-06T02:52:42.635-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.469-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.470-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.471-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.472-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.473-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
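
The Reporter lines above are the secondary's progress-feedback loop: after each applied batch, c20013 sends its sync source a replSetUpdatePosition command whose optimes array carries one durable/applied optime pair per member. Below is a rough shell rendering of request 1506, with the values copied from the log; it is an illustration only, since in practice only replica-set members issue this command. Note that the shell's Timestamp() takes seconds, while the log's "Timestamp 1459929146000|10" rendering appears to be milliseconds|increment.

    // Hypothetical shell sketch of the replSetUpdatePosition command in request 1506.
    db.adminCommand({
        replSetUpdatePosition: 1,
        optimes: [
            { durableOpTime: { ts: Timestamp(1459929146, 10), t: NumberLong(2) },
              appliedOpTime: { ts: Timestamp(1459929146, 10), t: NumberLong(2) },
              memberId: 0, cfgver: 1 },
            { durableOpTime: { ts: Timestamp(1459929161, 1), t: NumberLong(2) },
              appliedOpTime: { ts: Timestamp(1459929161, 3), t: NumberLong(2) },
              memberId: 1, cfgver: 1 },
            { durableOpTime: { ts: Timestamp(1459929162, 17), t: NumberLong(3) },
              appliedOpTime: { ts: Timestamp(1459929162, 18), t: NumberLong(3) },
              memberId: 2, cfgver: 1 }
        ]
    })
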
2016-04-06T02:54:04.478-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.481-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.484-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.486-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.486-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.489-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.502-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.506-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.508-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.513-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.515-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.516-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.518-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.523-0500 c20013| 2016-04-06T02:52:42.636-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1513 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.636-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|18, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.526-0500 c20013| 2016-04-06T02:52:42.636-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1513 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.526-0500 c20013| 2016-04-06T02:52:42.636-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.527-0500 2016-04-06T02:53:49.541-0500 I NETWORK [thread2] trying reconnect to mongovm16:20011 (192.168.100.28) failed [js_test:multi_coll_drop] 2016-04-06T02:54:04.529-0500 c20013| 2016-04-06T02:52:42.637-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.544-0500 c20013| 2016-04-06T02:52:42.637-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.550-0500 c20013| 2016-04-06T02:52:42.637-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1514 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|18, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.555-0500 c20013| 2016-04-06T02:52:42.637-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1514 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.558-0500 c20013| 2016-04-06T02:52:42.638-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1514 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.563-0500 c20013| 2016-04-06T02:52:42.697-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.567-0500 c20013| 2016-04-06T02:52:42.697-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1516 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.568-0500 c20013| 2016-04-06T02:52:42.697-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1516 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.570-0500 c20013| 2016-04-06T02:52:42.698-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1516 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.570-0500 c20013| 2016-04-06T02:52:42.700-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1513 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.571-0500 c20013| 2016-04-06T02:52:42.701-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|19, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.572-0500 c20013| 2016-04-06T02:52:42.701-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:04.573-0500 c20013| 2016-04-06T02:52:42.701-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1519 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.701-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|19, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.574-0500 c20013| 2016-04-06T02:52:42.701-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1519 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.576-0500 c20013| 2016-04-06T02:52:42.702-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.577-0500 c20013| 2016-04-06T02:52:42.702-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.578-0500 c20013| 2016-04-06T02:52:42.702-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.580-0500 c20013| 2016-04-06T02:52:42.702-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:04.593-0500 c20013| 2016-04-06T02:52:42.703-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|19, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:04.596-0500 c20013| 2016-04-06T02:52:42.714-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1519 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|20, t: 3, h: 8531577838120665184, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04a65c17830b843f1bf'), state: 2, when: new Date(1459929162713), why: "splitting chunk [{ _id: -68.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.597-0500 c20013| 2016-04-06T02:52:42.715-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|20 and ending at ts: Timestamp 1459929162000|20 [js_test:multi_coll_drop] 2016-04-06T02:54:04.597-0500 c20013| 2016-04-06T02:52:42.715-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.598-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.600-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.601-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.602-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.626-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.630-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.632-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.641-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.643-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.644-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.644-0500 c20013| 2016-04-06T02:52:42.716-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.645-0500 c20013| 2016-04-06T02:52:42.715-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:04.645-0500 c20013| 2016-04-06T02:52:42.716-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.647-0500 c20013| 2016-04-06T02:52:42.716-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:04.648-0500 c20013| 2016-04-06T02:52:42.716-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.650-0500 c20013| 2016-04-06T02:52:42.716-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.650-0500 c20013| 2016-04-06T02:52:42.715-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.655-0500 c20013| 2016-04-06T02:52:42.716-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.655-0500 c20013| 2016-04-06T02:52:42.716-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.656-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
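
Two details are worth unpacking from the conn10 read above. First, the plan ranker's score line decomposes as score = baseScore + advanced/works + tieBreakers = 1 + 1/1 + (0.0001 + 0.0001 + 0.0001) = 2.0003: a fully productive IXSCAN plan collecting all three tie-breaker bonuses. Second, the find itself is a causally pinned majority read; it names the optime the committed snapshot must reach before the query may run, which is why conn10 logs "Waiting for 'committed' snapshot". A shell sketch of the same command follows (values copied from the log; afterOpTime is an internal readConcern field, hence the raw runCommand form):

    // Sketch of conn10's majority read against config.chunks, pinned to optime |19.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        maxTimeMS: 30000,
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929162, 19), t: NumberLong(3) } }
    })
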
2016-04-06T02:54:04.660-0500 c20013| 2016-04-06T02:52:42.717-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1521 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.717-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|19, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.661-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.664-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.665-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.666-0500 c20013| 2016-04-06T02:52:42.717-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1521 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.669-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.669-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.671-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.671-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.689-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.689-0500 c20013| 2016-04-06T02:52:42.717-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.693-0500 c20013| 2016-04-06T02:52:42.718-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.694-0500 c20013| 2016-04-06T02:52:42.718-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.697-0500 c20013| 2016-04-06T02:52:42.718-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.698-0500 c20013| 2016-04-06T02:52:42.718-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.713-0500 c20013| 2016-04-06T02:52:42.718-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.714-0500 c20013| 2016-04-06T02:52:42.718-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.729-0500 c20013| 2016-04-06T02:52:42.718-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.752-0500 c20013| 2016-04-06T02:52:42.719-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1522 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|19, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.754-0500 c20013| 2016-04-06T02:52:42.719-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1522 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.755-0500 c20013| 2016-04-06T02:52:42.719-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1522 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.759-0500 c20013| 2016-04-06T02:52:42.735-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.771-0500 c20013| 2016-04-06T02:52:42.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1524 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.772-0500 c20013| 2016-04-06T02:52:42.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1524 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.774-0500 c20013| 2016-04-06T02:52:42.735-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1524 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.788-0500 c20013| 2016-04-06T02:52:42.737-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1521 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.789-0500 c20013| 2016-04-06T02:52:42.737-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|20, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.792-0500 c20013| 2016-04-06T02:52:42.737-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:04.804-0500 c20013| 2016-04-06T02:52:42.737-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1527 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.737-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|20, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.809-0500 c20013| 2016-04-06T02:52:42.737-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1527 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.822-0500 c20013| 2016-04-06T02:52:42.759-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1527 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|21, t: 3, h: 6251998028634303926, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: -67.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-68.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-67.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.827-0500 c20013| 2016-04-06T02:52:42.759-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|21 and ending at ts: Timestamp 1459929162000|21 [js_test:multi_coll_drop] 2016-04-06T02:54:04.843-0500 c20013| 2016-04-06T02:52:42.761-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.843-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.844-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.850-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.850-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.851-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.853-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.855-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.855-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.856-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.857-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.859-0500 c20013| 2016-04-06T02:52:42.761-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:04.860-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.861-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.862-0500 c20013| 2016-04-06T02:52:42.761-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.863-0500 c20013| 2016-04-06T02:52:42.761-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-68.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:04.864-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.865-0500 c20013| 2016-04-06T02:52:42.762-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-67.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:04.866-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.867-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.869-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
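
The request 1519 batch being applied above is the metadata half of a chunk split: one applyOps oplog entry upserts both halves of [{ _id: -68.0 }, { _id: MaxKey }), so the two config.chunks documents change atomically and the secondary replays them as the pair of idhack point writes seen in the executor lines. A trimmed, hypothetical reconstruction of that command, assembled from the logged oplog entry (b: true is the per-op upsert flag; chunk versions print as millis|increment in the log, hence Timestamp(1, 67) for "Timestamp 1000|67"):

    // Hypothetical reconstruction of the split's applyOps (cf. request 1519's nextBatch).
    db.adminCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-68.0", lastmod: Timestamp(1, 67),
                   lastmodEpoch: ObjectId("5704c02806c33406d4d9c0c0"),
                   ns: "multidrop.coll", min: { _id: -68.0 }, max: { _id: -67.0 },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-68.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp(1, 68),
                   lastmodEpoch: ObjectId("5704c02806c33406d4d9c0c0"),
                   ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: MaxKey },
                   shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-67.0" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    })
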
2016-04-06T02:54:04.876-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.878-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.880-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.884-0500 c20013| 2016-04-06T02:52:42.762-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1529 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.762-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|20, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.886-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.897-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.898-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.899-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.900-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.903-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.903-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.904-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.908-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.909-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.910-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.912-0500 c20013| 2016-04-06T02:52:42.762-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:04.913-0500 c20013| 2016-04-06T02:52:42.762-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1529 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.917-0500 c20013| 2016-04-06T02:52:42.763-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:04.923-0500 c20013| 2016-04-06T02:52:42.763-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.926-0500 c20013| 2016-04-06T02:52:42.763-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1530 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.927-0500 c20013| 2016-04-06T02:52:42.763-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1530 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.932-0500 c20013| 2016-04-06T02:52:42.764-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1530 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.939-0500 c20013| 2016-04-06T02:52:42.774-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.960-0500 c20013| 2016-04-06T02:52:42.774-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1532 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:04.962-0500 c20013| 2016-04-06T02:52:42.774-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1532 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:04.963-0500 c20013| 2016-04-06T02:52:42.774-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1532 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.964-0500 c20013| 2016-04-06T02:52:42.776-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1529 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.965-0500 2016-04-06T02:53:49.544-0500 I NETWORK [thread2] reconnect mongovm16:20011 (192.168.100.28) ok [js_test:multi_coll_drop] 2016-04-06T02:54:04.970-0500 c20013| 2016-04-06T02:52:42.776-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|21, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:04.971-0500 c20013| 2016-04-06T02:52:42.776-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:04.986-0500 c20013| 2016-04-06T02:52:42.776-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1535 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.776-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|21, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:04.993-0500 c20013| 2016-04-06T02:52:42.776-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1535 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.021-0500 c20013| 2016-04-06T02:52:42.781-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1535 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|22, t: 3, h: 7283501194637932925, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:42.780-0500-5704c04a65c17830b843f1c0", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162780), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -68.0 }, max: { _id: MaxKey } }, left: { min: { _id: -68.0 }, max: { _id: -67.0 }, lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -67.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.022-0500 c20013| 2016-04-06T02:52:42.781-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|22 and ending at ts: Timestamp 1459929162000|22 [js_test:multi_coll_drop] 2016-04-06T02:54:05.022-0500 c20013| 2016-04-06T02:52:42.781-0500 D REPL [rsBackgroundSync-0] bgsync buffer has 0 bytes [js_test:multi_coll_drop] 2016-04-06T02:54:05.023-0500 c20013| 2016-04-06T02:52:42.783-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.028-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.028-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.029-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.032-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.034-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.035-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.036-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.036-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.039-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.040-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.040-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.041-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.041-0500 c20013| 2016-04-06T02:52:42.783-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:05.043-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.047-0500 c20013| 2016-04-06T02:52:42.783-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1537 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.783-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|21, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.050-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.054-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.056-0500 c20013| 2016-04-06T02:52:42.783-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1537 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.059-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool 
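
The op fetched in request 1535 above is the audit half of the same split: an insert into config.changelog whose details subdocument records the pre-split range (before) and the two resulting chunks (left and right). To pull that record back out of a config server later, a query along these lines would do it (a sketch; field names are taken from the logged document):

    // Sketch: retrieve the most recent "split" changelog entry for this collection.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .limit(1)
        .pretty()
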
[js_test:multi_coll_drop] 2016-04-06T02:54:05.059-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.064-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.067-0500 c20013| 2016-04-06T02:52:42.783-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.068-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.072-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.074-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.074-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.077-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.082-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.084-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.086-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.086-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.086-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.088-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.089-0500 c20013| 2016-04-06T02:52:42.784-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.094-0500 c20013| 2016-04-06T02:52:42.787-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.096-0500 c20013| 2016-04-06T02:52:42.788-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.107-0500 c20013| 2016-04-06T02:52:42.788-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.112-0500 c20013| 2016-04-06T02:52:42.788-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1538 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.124-0500 c20013| 2016-04-06T02:52:42.788-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1538 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.131-0500 c20013| 2016-04-06T02:52:42.788-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1538 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.167-0500 c20013| 2016-04-06T02:52:42.804-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.179-0500 c20013| 2016-04-06T02:52:42.804-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1540 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.181-0500 c20013| 2016-04-06T02:52:42.804-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1540 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.184-0500 c20013| 2016-04-06T02:52:42.805-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1540 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.198-0500 c20013| 2016-04-06T02:52:42.805-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1537 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.202-0500 c20013| 2016-04-06T02:52:42.805-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|22, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.205-0500 c20013| 2016-04-06T02:52:42.805-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:05.207-0500 c20013| 2016-04-06T02:52:42.805-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1543 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.805-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|22, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.208-0500 c20013| 2016-04-06T02:52:42.805-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1543 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.211-0500 c20013| 2016-04-06T02:52:42.807-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1543 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|23, t: 3, h: 8752573526769020090, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.213-0500 c20013| 2016-04-06T02:52:42.808-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|23 and ending at ts: Timestamp 1459929162000|23 [js_test:multi_coll_drop] 2016-04-06T02:54:05.214-0500 c20013| 2016-04-06T02:52:42.809-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.215-0500 c20013| 2016-04-06T02:52:42.809-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.215-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.217-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.218-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.218-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.218-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.219-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.220-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.221-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.222-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.224-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.225-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.226-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.226-0500 c20013| 2016-04-06T02:52:42.810-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:05.227-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.228-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.230-0500 c20013| 2016-04-06T02:52:42.810-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:05.232-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.233-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.234-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:05.235-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.237-0500 c20013| 2016-04-06T02:52:42.810-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1545 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.810-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|22, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.237-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.238-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.239-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.239-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.240-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.240-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.241-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.241-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.242-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.242-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.242-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.243-0500 c20013| 2016-04-06T02:52:42.810-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.244-0500 c20013| 2016-04-06T02:52:42.811-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.245-0500 c20013| 2016-04-06T02:52:42.811-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1545 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.245-0500 c20013| 2016-04-06T02:52:42.811-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.247-0500 c20013| 2016-04-06T02:52:42.811-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.249-0500 c20013| 2016-04-06T02:52:42.811-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1546 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|22, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.250-0500 c20013| 2016-04-06T02:52:42.811-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1546 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.251-0500 c20013| 2016-04-06T02:52:42.811-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1546 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.252-0500 c20013| 2016-04-06T02:52:42.822-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.255-0500 c20013| 2016-04-06T02:52:42.822-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1548 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.255-0500 c20013| 2016-04-06T02:52:42.823-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1548 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.256-0500 c20013| 2016-04-06T02:52:42.823-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1548 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.256-0500 c20013| 2016-04-06T02:52:42.823-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1545 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.257-0500 c20013| 2016-04-06T02:52:42.824-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|23, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.257-0500 c20013| 2016-04-06T02:52:42.824-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:05.258-0500 c20013| 2016-04-06T02:52:42.824-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1551 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.824-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.260-0500 c20013| 2016-04-06T02:52:42.824-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.262-0500 c20013| 2016-04-06T02:52:42.824-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.264-0500 c20013| 2016-04-06T02:52:42.824-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.268-0500 c20013| 2016-04-06T02:52:42.825-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:05.270-0500 c20013| 2016-04-06T02:52:42.833-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:54:05.272-0500 c20013| 2016-04-06T02:52:42.834-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|66 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.278-0500 c20013| 2016-04-06T02:52:42.834-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.282-0500 c20013| 2016-04-06T02:52:42.834-0500 D COMMAND [conn10] Using 
'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|66 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.283-0500 c20013| 2016-04-06T02:52:42.834-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:05.290-0500 c20013| 2016-04-06T02:52:42.835-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|66 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|23, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:05.294-0500 c20013| 2016-04-06T02:52:42.840-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1551 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.297-0500 c20013| 2016-04-06T02:52:42.843-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1551 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|24, t: 3, h: 1860944858099365931, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04a65c17830b843f1c1'), state: 2, when: new Date(1459929162840), why: "splitting chunk [{ _id: -67.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.299-0500 c20013| 2016-04-06T02:52:42.844-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|24 and ending at ts: Timestamp 1459929162000|24 [js_test:multi_coll_drop] 2016-04-06T02:54:05.303-0500 c20013| 2016-04-06T02:52:42.846-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1553 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.846-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|23, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.306-0500 c20013| 2016-04-06T02:52:42.846-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1553 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.308-0500 c20013| 2016-04-06T02:52:42.852-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.308-0500 c20013| 2016-04-06T02:52:42.852-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.309-0500 c20013| 2016-04-06T02:52:42.852-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.309-0500 c20013| 2016-04-06T02:52:42.852-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.311-0500 c20013| 2016-04-06T02:52:42.852-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.311-0500 c20013| 2016-04-06T02:52:42.852-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.312-0500 c20013| 2016-04-06T02:52:42.852-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.313-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.315-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.316-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.318-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.319-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.320-0500 c20013| 2016-04-06T02:52:42.853-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:05.322-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.324-0500 c20013| 2016-04-06T02:52:42.853-0500 D QUERY [repl writer worker 1] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:05.327-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.328-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.329-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.330-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.332-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.333-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:05.334-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.336-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.339-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.340-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.340-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.341-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.342-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.345-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.346-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.352-0500 c20013| 2016-04-06T02:52:42.853-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.353-0500 c20013| 2016-04-06T02:52:42.854-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.372-0500 c20013| 2016-04-06T02:52:42.854-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.375-0500 c20013| 2016-04-06T02:52:42.856-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.377-0500 c20013| 2016-04-06T02:52:42.856-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.380-0500 c20013| 2016-04-06T02:52:42.856-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.387-0500 c20013| 2016-04-06T02:52:42.856-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.390-0500 c20013| 2016-04-06T02:52:42.856-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1554 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|23, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.392-0500 c20013| 2016-04-06T02:52:42.856-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1554 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.394-0500 c20013| 2016-04-06T02:52:42.857-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1554 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.403-0500 c20013| 2016-04-06T02:52:42.864-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.407-0500 c20013| 2016-04-06T02:52:42.864-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1556 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.410-0500 c20013| 2016-04-06T02:52:42.864-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1556 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.412-0500 c20013| 2016-04-06T02:52:42.865-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1556 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.415-0500 c20013| 2016-04-06T02:52:42.865-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1553 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.416-0500 c20013| 2016-04-06T02:52:42.865-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|24, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.416-0500 c20013| 2016-04-06T02:52:42.865-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:05.420-0500 c20013| 2016-04-06T02:52:42.865-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|24, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.421-0500 c20013| 2016-04-06T02:52:42.865-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1559 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.865-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|24, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.424-0500 c20013| 2016-04-06T02:52:42.865-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|24, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.426-0500 c20013| 2016-04-06T02:52:42.865-0500 D COMMAND [conn15] Using 'committed' snapshot. { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|24, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.428-0500 c20013| 2016-04-06T02:52:42.865-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:05.431-0500 c20013| 2016-04-06T02:52:42.865-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|24, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:05.433-0500 c20013| 2016-04-06T02:52:42.866-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1559 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.437-0500 c20013| 2016-04-06T02:52:42.876-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1559 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|25, t: 3, h: -2362474195479051620, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-67.0", lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -67.0 }, max: { _id: -66.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-67.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-66.0", lastmod: 
Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-66.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.696-0500 c20013| 2016-04-06T02:52:42.876-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|25 and ending at ts: Timestamp 1459929162000|25 [js_test:multi_coll_drop] 2016-04-06T02:54:05.699-0500 c20013| 2016-04-06T02:52:42.876-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.703-0500 c20013| 2016-04-06T02:52:42.876-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.704-0500 c20013| 2016-04-06T02:52:42.876-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.705-0500 c20013| 2016-04-06T02:52:42.876-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.706-0500 c20013| 2016-04-06T02:52:42.876-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.710-0500 c20013| 2016-04-06T02:52:42.876-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.712-0500 c20013| 2016-04-06T02:52:42.876-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.713-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.713-0500 c20013| 2016-04-06T02:52:42.876-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.714-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.715-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.716-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.716-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.718-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.718-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.720-0500 c20013| 2016-04-06T02:52:42.877-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:05.721-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer 
worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.723-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.724-0500 c20013| 2016-04-06T02:52:42.877-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-67.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:05.724-0500 c20013| 2016-04-06T02:52:42.877-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-66.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:05.725-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.728-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.729-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.747-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.747-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.748-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.749-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.755-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.756-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.759-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.760-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.761-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.762-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.762-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.762-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.763-0500 c20013| 2016-04-06T02:52:42.877-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.763-0500 c20013| 2016-04-06T02:52:42.878-0500 D QUERY [rsSync] Only one plan is available; it will be 
run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.770-0500 c20013| 2016-04-06T02:52:42.878-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.774-0500 c20013| 2016-04-06T02:52:42.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1561 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.775-0500 c20013| 2016-04-06T02:52:42.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1561 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.777-0500 c20013| 2016-04-06T02:52:42.878-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1561 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.780-0500 c20013| 2016-04-06T02:52:42.878-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1563 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.878-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|24, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.781-0500 c20013| 2016-04-06T02:52:42.881-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1563 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.783-0500 c20013| 2016-04-06T02:52:42.893-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.785-0500 c20013| 2016-04-06T02:52:42.894-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1564 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, 
appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.787-0500 c20013| 2016-04-06T02:52:42.894-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1564 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.788-0500 c20013| 2016-04-06T02:52:42.894-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1564 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.796-0500 c20013| 2016-04-06T02:52:42.897-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1563 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|26, t: 3, h: -1073076068676759960, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:42.894-0500-5704c04a65c17830b843f1c2", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929162894), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -67.0 }, max: { _id: MaxKey } }, left: { min: { _id: -67.0 }, max: { _id: -66.0 }, lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -66.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.799-0500 c20013| 2016-04-06T02:52:42.897-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|25, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.808-0500 c20013| 2016-04-06T02:52:42.897-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|26 and ending at ts: Timestamp 1459929162000|26 [js_test:multi_coll_drop] 2016-04-06T02:54:05.816-0500 c20013| 2016-04-06T02:52:42.897-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.817-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.820-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.821-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.822-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.823-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.841-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.842-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.849-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.850-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.851-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.852-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.852-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.852-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.853-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.854-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.855-0500 c20013| 2016-04-06T02:52:42.898-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:05.855-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.856-0500 c20013| 2016-04-06T02:52:42.898-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.857-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.869-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:05.872-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.875-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.877-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.878-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.891-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.892-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.893-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.894-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.904-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.911-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.914-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.917-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.918-0500 c20013| 2016-04-06T02:52:42.899-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:05.921-0500 c20013| 2016-04-06T02:52:42.899-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1567 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.899-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|25, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.926-0500 c20013| 2016-04-06T02:52:42.899-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1567 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.941-0500 c20013| 2016-04-06T02:52:42.903-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:05.956-0500 c20013| 2016-04-06T02:52:42.903-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.963-0500 c20013| 2016-04-06T02:52:42.903-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1568 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.964-0500 c20013| 2016-04-06T02:52:42.903-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1568 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.971-0500 c20013| 2016-04-06T02:52:42.903-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1568 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.973-0500 c20013| 2016-04-06T02:52:42.908-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1570 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:52:52.908-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:05.974-0500 c20013| 2016-04-06T02:52:42.908-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1570 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.979-0500 c20013| 2016-04-06T02:52:42.910-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1570 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, opTime: { ts: Timestamp 1459929162000|26, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:05.979-0500 c20013| 2016-04-06T02:52:42.910-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:52:44.910Z [js_test:multi_coll_drop] 2016-04-06T02:54:05.981-0500 c20013| 2016-04-06T02:52:42.910-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, 
appliedOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.986-0500 c20013| 2016-04-06T02:52:42.910-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1572 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:05.988-0500 c20013| 2016-04-06T02:52:42.910-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1572 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:05.990-0500 c20013| 2016-04-06T02:52:42.910-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1572 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.001-0500 c20013| 2016-04-06T02:52:42.911-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1567 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.002-0500 c20013| 2016-04-06T02:52:42.911-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|26, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.003-0500 c20013| 2016-04-06T02:52:42.911-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:06.003-0500 c20013| 2016-04-06T02:52:42.911-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1575 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.911-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|26, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.003-0500 c20013| 2016-04-06T02:52:42.913-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1575 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.006-0500 c20013| 2016-04-06T02:52:42.920-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1575 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|27, t: 3, h: 6098684010735250913, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.006-0500 c20013| 2016-04-06T02:52:42.920-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|27 and ending at ts: Timestamp 1459929162000|27 [js_test:multi_coll_drop] 2016-04-06T02:54:06.006-0500 c20013| 2016-04-06T02:52:42.920-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.006-0500 c20013| 2016-04-06T02:52:42.920-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.007-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.007-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.007-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.007-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.007-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.007-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.007-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.008-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.008-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.008-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.008-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.008-0500 c20013| 2016-04-06T02:52:42.921-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:06.008-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.008-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.008-0500 c20013| 2016-04-06T02:52:42.921-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:06.009-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.009-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.009-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.009-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:06.009-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.009-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.009-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.010-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.010-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.010-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.010-0500 c20013| 2016-04-06T02:52:42.921-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.010-0500 c20013| 2016-04-06T02:52:42.922-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.010-0500 c20013| 2016-04-06T02:52:42.922-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.010-0500 c20013| 2016-04-06T02:52:42.922-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.011-0500 c20013| 2016-04-06T02:52:42.922-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.011-0500 c20013| 2016-04-06T02:52:42.922-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.011-0500 c20013| 2016-04-06T02:52:42.922-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.011-0500 c20013| 2016-04-06T02:52:42.922-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.011-0500 c20013| 2016-04-06T02:52:42.922-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.012-0500 c20013| 2016-04-06T02:52:42.922-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1577 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.922-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|26, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.012-0500 c20013| 2016-04-06T02:52:42.922-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1577 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.027-0500 c20013| 2016-04-06T02:52:42.923-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.034-0500 c20013| 2016-04-06T02:52:42.923-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1578 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|26, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.043-0500 c20013| 2016-04-06T02:52:42.923-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1578 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.043-0500 c20013| 2016-04-06T02:52:42.923-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1578 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.045-0500 c20013| 2016-04-06T02:52:42.936-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.051-0500 c20013| 2016-04-06T02:52:42.936-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1580 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 
1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.054-0500 c20013| 2016-04-06T02:52:42.936-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1580 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.059-0500 c20013| 2016-04-06T02:52:42.936-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1580 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.065-0500 c20013| 2016-04-06T02:52:42.936-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1577 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.066-0500 c20013| 2016-04-06T02:52:42.936-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|27, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.069-0500 c20013| 2016-04-06T02:52:42.936-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:06.077-0500 c20013| 2016-04-06T02:52:42.937-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1583 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.937-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.081-0500 c20013| 2016-04-06T02:52:42.937-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1583 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.083-0500 c20013| 2016-04-06T02:52:42.941-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.085-0500 c20013| 2016-04-06T02:52:42.941-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.107-0500 c20013| 2016-04-06T02:52:42.941-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.110-0500 c20013| 2016-04-06T02:52:42.942-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:06.110-0500 2016-04-06T02:53:50.434-0500 I NETWORK [thread2] trying reconnect to mongovm16:20012 (192.168.100.28) failed [js_test:multi_coll_drop] 2016-04-06T02:54:06.112-0500 c20013| 2016-04-06T02:52:42.942-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:06.115-0500 c20013| 2016-04-06T02:52:42.945-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.117-0500 c20013| 2016-04-06T02:52:42.945-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.120-0500 c20013| 2016-04-06T02:52:42.945-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.122-0500 c20013| 2016-04-06T02:52:42.945-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:06.126-0500 c20013| 2016-04-06T02:52:42.952-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929162000|27, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:06.134-0500 c20013| 2016-04-06T02:52:42.958-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1583 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929162000|28, t: 3, h: -8839070260856131231, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04a65c17830b843f1c3'), state: 2, when: new Date(1459929162952), why: "splitting chunk [{ _id: -66.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.135-0500 c20013| 2016-04-06T02:52:42.958-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929162000|28 and ending at ts: Timestamp 1459929162000|28 [js_test:multi_coll_drop] 2016-04-06T02:54:06.139-0500 c20013| 2016-04-06T02:52:42.959-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.142-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.142-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.143-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.144-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.145-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.147-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.148-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.151-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.152-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.152-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.153-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.155-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.155-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.158-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.158-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.160-0500 c20013| 2016-04-06T02:52:42.959-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:06.161-0500 c20013| 2016-04-06T02:52:42.959-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.161-0500 c20013| 2016-04-06T02:52:42.959-0500 D QUERY [repl writer worker 0] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:06.162-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.164-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
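
The conn10 commands interleaved above show the causal read pattern on the chunk metadata: each find on config.chunks requests readConcern level "majority" with an afterOpTime equal to the last metadata write, so the server blocks ("Waiting for 'committed' snapshot") until the commit point reaches that optime before answering from the committed snapshot. afterOpTime is a server-to-server field; a plain majority read of the same metadata would look like this sketch:

    // Sketch of the majority read issued on conn10 above (afterOpTime omitted,
    // since it is internal to the sharding components).
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll" },
        sort: { lastmod: -1 },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });
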
2016-04-06T02:54:06.167-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.168-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.171-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.172-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.173-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.174-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.175-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.177-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.178-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.180-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.180-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.180-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.181-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.182-0500 c20013| 2016-04-06T02:52:42.960-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.186-0500 c20013| 2016-04-06T02:52:42.960-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.188-0500 c20013| 2016-04-06T02:52:42.961-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1585 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:47.961-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|27, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.191-0500 c20013| 2016-04-06T02:52:42.961-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.194-0500 c20013| 2016-04-06T02:52:42.961-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1586 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|27, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.206-0500 c20013| 2016-04-06T02:52:42.961-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1586 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.206-0500 c20013| 2016-04-06T02:52:42.961-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1585 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.208-0500 c20013| 2016-04-06T02:52:42.961-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1586 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.235-0500 c20013| 2016-04-06T02:52:43.046-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.240-0500 c20013| 2016-04-06T02:52:43.046-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1588 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 
1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.242-0500 c20013| 2016-04-06T02:52:43.046-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1588 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.245-0500 c20013| 2016-04-06T02:52:43.046-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1588 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.247-0500 c20013| 2016-04-06T02:52:43.047-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1585 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.248-0500 c20013| 2016-04-06T02:52:43.047-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929162000|28, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.249-0500 c20013| 2016-04-06T02:52:43.047-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:06.251-0500 c20013| 2016-04-06T02:52:43.047-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1591 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.047-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|28, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.252-0500 c20013| 2016-04-06T02:52:43.047-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1591 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.268-0500 c20013| 2016-04-06T02:52:43.083-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1591 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929163000|1, t: 3, h: -6983826774664833750, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-66.0", lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -66.0 }, max: { _id: -65.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-66.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.269-0500 c20013| 2016-04-06T02:52:43.084-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929163000|1 and ending at ts: Timestamp 1459929163000|1 [js_test:multi_coll_drop] 2016-04-06T02:54:06.273-0500 c20013| 2016-04-06T02:52:43.084-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.274-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.277-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.279-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.281-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.282-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.283-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.288-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.289-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.290-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.295-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.298-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.299-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.299-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.302-0500 c20013| 2016-04-06T02:52:43.084-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:06.306-0500 c20013| 2016-04-06T02:52:43.084-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-66.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:06.308-0500 c20013| 2016-04-06T02:52:43.085-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-65.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:06.310-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.311-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.312-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.313-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
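
The op at ts 1459929163000|1 is the split commit itself: one applyOps command entry that upserts both halves of the former [{ _id: -66.0 }, MaxKey) chunk, bumping the minor version to 1|71 for [-66, -65) and 1|72 for [-65, MaxKey), so the config.chunks change lands atomically. Reading the result back, as a sketch:

    // Sketch: read back the two chunk documents the applyOps entry above wrote.
    db.getSiblingDB("config").chunks
        .find({ ns: "multidrop.coll" })
        .sort({ lastmod: -1 })
        .limit(2)
        .forEach(printjson);
    // Expect lastmod Timestamp(1, 72) for { _id: -65 } -> MaxKey and
    // Timestamp(1, 71) for { _id: -66 } -> { _id: -65 }.
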
2016-04-06T02:54:06.313-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.315-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.332-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.343-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.344-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.351-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.361-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.362-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.363-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.364-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.365-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.368-0500 c20013| 2016-04-06T02:52:43.084-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.368-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.369-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.370-0500 c20013| 2016-04-06T02:52:43.085-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.373-0500 c20013| 2016-04-06T02:52:43.086-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1593 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.086-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929162000|28, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.379-0500 c20013| 2016-04-06T02:52:43.086-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1593 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.389-0500 c20013| 2016-04-06T02:52:43.091-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.446-0500 c20013| 2016-04-06T02:52:43.091-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.472-0500 c20013| 2016-04-06T02:52:43.091-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1594 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.492-0500 c20013| 2016-04-06T02:52:43.091-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1594 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.495-0500 c20013| 2016-04-06T02:52:43.091-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1594 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.504-0500 c20013| 2016-04-06T02:52:43.116-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.520-0500 c20013| 2016-04-06T02:52:43.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1596 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.524-0500 c20013| 2016-04-06T02:52:43.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1596 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.530-0500 c20013| 2016-04-06T02:52:43.116-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1596 finished with 
response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.534-0500 c20013| 2016-04-06T02:52:43.119-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1593 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.537-0500 c20013| 2016-04-06T02:52:43.119-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.548-0500 c20013| 2016-04-06T02:52:43.119-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:06.564-0500 c20013| 2016-04-06T02:52:43.119-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1599 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.119-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|1, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.565-0500 c20013| 2016-04-06T02:52:43.120-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1599 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.581-0500 c20013| 2016-04-06T02:52:43.122-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1599 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929163000|2, t: 3, h: -3691712439411572840, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:43.119-0500-5704c04b65c17830b843f1c4", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163119), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -66.0 }, max: { _id: MaxKey } }, left: { min: { _id: -66.0 }, max: { _id: -65.0 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -65.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.585-0500 c20013| 2016-04-06T02:52:43.122-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929163000|2 and ending at ts: Timestamp 1459929163000|2 [js_test:multi_coll_drop] 2016-04-06T02:54:06.588-0500 c20013| 2016-04-06T02:52:43.123-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.591-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.591-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.599-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.600-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.601-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.601-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.603-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.604-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.606-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.607-0500 c20013| 2016-04-06T02:52:43.123-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.608-0500 c20013| 2016-04-06T02:52:43.124-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.608-0500 c20013| 2016-04-06T02:52:43.124-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:06.609-0500 c20013| 2016-04-06T02:52:43.124-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.610-0500 c20013| 2016-04-06T02:52:43.124-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.611-0500 c20013| 2016-04-06T02:52:43.124-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.611-0500 c20013| 2016-04-06T02:52:43.124-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.614-0500 c20013| 2016-04-06T02:52:43.124-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.616-0500 c20013| 2016-04-06T02:52:43.124-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.619-0500 c20013| 2016-04-06T02:52:43.126-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1601 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.126-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|1, t: 3 } } 
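
The op at ts 1459929163000|2 is the audit trail for that split: an insert into config.changelog recording the server, client address, the before range, and the resulting left/right chunks. A sketch of pulling that record back out:

    // Sketch: fetch the most recent split record for this collection from the
    // changelog collection targeted by the insert above.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .limit(1)
        .forEach(printjson);
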
[js_test:multi_coll_drop] 2016-04-06T02:54:06.620-0500 c20013| 2016-04-06T02:52:43.126-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.621-0500 c20013| 2016-04-06T02:52:43.126-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.622-0500 c20013| 2016-04-06T02:52:43.126-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1601 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.624-0500 c20013| 2016-04-06T02:52:43.126-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.627-0500 c20013| 2016-04-06T02:52:43.126-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.628-0500 c20013| 2016-04-06T02:52:43.126-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.628-0500 c20013| 2016-04-06T02:52:43.126-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.631-0500 c20013| 2016-04-06T02:52:43.126-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.632-0500 c20013| 2016-04-06T02:52:43.126-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.643-0500 c20013| 2016-04-06T02:52:43.130-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.645-0500 c20013| 2016-04-06T02:52:43.130-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.647-0500 c20013| 2016-04-06T02:52:43.131-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.648-0500 c20013| 2016-04-06T02:52:43.131-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.651-0500 c20013| 2016-04-06T02:52:43.131-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.651-0500 c20013| 2016-04-06T02:52:43.131-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.654-0500 c20013| 2016-04-06T02:52:43.131-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.656-0500 c20013| 2016-04-06T02:52:43.131-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.660-0500 c20013| 2016-04-06T02:52:43.131-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.665-0500 c20013| 2016-04-06T02:52:43.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1602 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.666-0500 c20013| 2016-04-06T02:52:43.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1602 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.668-0500 c20013| 2016-04-06T02:52:43.131-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1602 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.672-0500 c20013| 2016-04-06T02:52:43.139-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.676-0500 c20013| 2016-04-06T02:52:43.139-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1604 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.677-0500 c20013| 2016-04-06T02:52:43.139-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1604 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.680-0500 c20013| 2016-04-06T02:52:43.139-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1604 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.681-0500 c20013| 2016-04-06T02:52:43.159-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1601 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.684-0500 c20013| 2016-04-06T02:52:43.159-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|2, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.685-0500 c20013| 2016-04-06T02:52:43.159-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:06.697-0500 c20013| 2016-04-06T02:52:43.160-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1607 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.160-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.703-0500 c20013| 2016-04-06T02:52:43.160-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1607 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.739-0500 c20013| 2016-04-06T02:52:43.161-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1607 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929163000|3, t: 3, h: -5230974407681466498, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.742-0500 c20013| 2016-04-06T02:52:43.161-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929163000|3 and ending at ts: Timestamp 1459929163000|3 [js_test:multi_coll_drop] 2016-04-06T02:54:06.746-0500 c20013| 2016-04-06T02:52:43.164-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1609 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.163-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.746-0500 c20013| 2016-04-06T02:52:43.164-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1609 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.747-0500 c20013| 2016-04-06T02:52:43.167-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.749-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.749-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.750-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.766-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.769-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.775-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.777-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.778-0500 c20013| 2016-04-06T02:52:43.168-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:06.781-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.782-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.787-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.789-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.790-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.791-0500 c20013| 2016-04-06T02:52:43.168-0500 D QUERY [repl writer worker 13] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:06.792-0500 c20013| 2016-04-06T02:52:43.168-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.793-0500 c20013| 2016-04-06T02:52:43.169-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.795-0500 c20013| 2016-04-06T02:52:43.169-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.796-0500 c20013| 2016-04-06T02:52:43.169-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.798-0500 c20013| 2016-04-06T02:52:43.169-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.798-0500 c20013| 2016-04-06T02:52:43.169-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
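
The config.locks updates in this stretch trace a full distributed-lock cycle for multidrop.coll: the document is set to state: 2 with a fresh ts ObjectId and why: "splitting chunk ..." when the lock is taken, and back to state: 0 when it is released (at ts ...|27 and again at ...|3). A small polling sketch in jstest style, assuming the standard assert.soon shell helper and that state 0 means released, as the ops above suggest:

    // Sketch: wait until the distributed lock for a namespace is released,
    // i.e. its config.locks document is absent or back at state 0.
    function waitForLockRelease(configDB, name) {
        assert.soon(function() {
            var lock = configDB.locks.findOne({ _id: name });
            return lock === null || lock.state === 0;
        }, "lock " + name + " was not released");
    }
    waitForLockRelease(db.getSiblingDB("config"), "multidrop.coll");
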
2016-04-06T02:54:06.800-0500 c20013| 2016-04-06T02:52:43.170-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.802-0500 c20013| 2016-04-06T02:52:43.170-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.802-0500 c20013| 2016-04-06T02:52:43.170-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.803-0500 c20013| 2016-04-06T02:52:43.170-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.804-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.805-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.806-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.809-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.810-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.811-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.812-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.813-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.813-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.814-0500 c20013| 2016-04-06T02:52:43.179-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.817-0500 c20013| 2016-04-06T02:52:43.180-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.826-0500 c20013| 2016-04-06T02:52:43.180-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.830-0500 c20013| 2016-04-06T02:52:43.180-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1610 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.833-0500 c20013| 2016-04-06T02:52:43.180-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1610 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.843-0500 c20013| 2016-04-06T02:52:43.180-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1610 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.851-0500 c20013| 2016-04-06T02:52:43.185-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.856-0500 c20013| 2016-04-06T02:52:43.185-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1612 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:06.859-0500 c20013| 2016-04-06T02:52:43.185-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1612 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.865-0500 c20013| 2016-04-06T02:52:43.188-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1612 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.870-0500 c20013| 2016-04-06T02:52:43.191-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1609 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.875-0500 c20013| 2016-04-06T02:52:43.191-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|3, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.877-0500 c20013| 2016-04-06T02:52:43.191-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:06.880-0500 c20013| 2016-04-06T02:52:43.191-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1615 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.191-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.881-0500 c20013| 2016-04-06T02:52:43.191-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1615 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:06.883-0500 c20013| 2016-04-06T02:52:43.198-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|70 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.884-0500 c20013| 2016-04-06T02:52:43.198-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.894-0500 c20013| 2016-04-06T02:52:43.198-0500 D COMMAND [conn10] Using 'committed' snapshot. 
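The config-server reads above pair "Waiting for 'committed' snapshot" with "Using 'committed' snapshot": readConcern level "majority" with afterOpTime parks the command until a majority-committed snapshot at or beyond the requested opTime exists, then answers from that snapshot, so the reader can never observe config metadata that might later be rolled back. A sketch of the logged find in shell syntax (afterOpTime is an internal option used by the sharding code, so treat this as the command's shape rather than a supported client call; Timestamp(1, 70) and Timestamp(1459929163, 3) are the shell spellings of the logged "Timestamp 1000|70" and "Timestamp 1459929163000|3"):

    // Majority read of config.chunks, gated on a minimum opTime.
    db.getSiblingDB("config").runCommand({
        find: "chunks",
        filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp(1, 70) } },
        sort: { lastmod: 1 },
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929163, 3), t: NumberLong(3) }
        },
        maxTimeMS: 30000
    });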
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|70 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.896-0500 c20013| 2016-04-06T02:52:43.198-0500 D QUERY [conn10] score(1.66697) = baseScore(1) + productivity((2 advanced)/(3 works) = 0.666667) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:06.918-0500 c20013| 2016-04-06T02:52:43.200-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|70 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:2 docsExamined:2 cursorExhausted:1 numYields:0 nreturned:2 reslen:712 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:06.919-0500 c20013| 2016-04-06T02:52:43.201-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.920-0500 c20013| 2016-04-06T02:52:43.202-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.922-0500 c20013| 2016-04-06T02:52:43.202-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.923-0500 c20013| 2016-04-06T02:52:43.202-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:06.928-0500 c20013| 2016-04-06T02:52:43.203-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|3, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:06.932-0500 c20013| 2016-04-06T02:52:43.205-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1615 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929163000|4, t: 3, h: 6336516151299301636, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04b65c17830b843f1c5'), state: 2, when: new Date(1459929163203), why: "splitting chunk [{ _id: -65.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:06.933-0500 c20013| 2016-04-06T02:52:43.205-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929163000|4 and ending at ts: Timestamp 1459929163000|4 [js_test:multi_coll_drop] 2016-04-06T02:54:06.941-0500 c20013| 2016-04-06T02:52:43.205-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:06.942-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.944-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.946-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.947-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.959-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.959-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.970-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.971-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.976-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.980-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.981-0500 c20013| 2016-04-06T02:52:43.206-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:06.983-0500 c20013| 2016-04-06T02:52:43.206-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.985-0500 c20013| 2016-04-06T02:52:43.206-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:06.986-0500 c20013| 2016-04-06T02:52:43.207-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.987-0500 c20013| 2016-04-06T02:52:43.207-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.989-0500 c20013| 2016-04-06T02:52:43.207-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1617 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.207-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|3, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:06.992-0500 c20013| 2016-04-06T02:52:43.207-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:06.993-0500 c20013| 2016-04-06T02:52:43.208-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.018-0500 c20013| 2016-04-06T02:52:43.208-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1617 on host mongovm16:20011 
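RemoteCommand 1617 above is one round of oplog tailing: the secondary keeps re-issuing getMore on a long-lived cursor over the sync source's local.oplog.rs, passing its election term and last known committed opTime and bounding the wait with maxTimeMS: 2500, so an idle upstream returns an empty batch within about 2.5 seconds (the "fetcher read 0 operations" lines). The command shape, copied from the log (term and lastKnownCommittedOpTime are internal replication fields, shown here for illustration only):

    // One oplog-tailing round trip against the sync source ('local' database).
    db.getSiblingDB("local").runCommand({
        getMore: NumberLong("19853084149"),  // cursor opened by the initial oplog find
        collection: "oplog.rs",
        maxTimeMS: 2500,                     // bounded await; empty batch on timeout
        term: NumberLong(3),
        lastKnownCommittedOpTime: { ts: Timestamp(1459929163, 3), t: NumberLong(3) }
    });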
[js_test:multi_coll_drop] 2016-04-06T02:54:07.018-0500 c20013| 2016-04-06T02:52:43.210-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.019-0500 c20013| 2016-04-06T02:52:43.211-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.023-0500 c20013| 2016-04-06T02:52:43.211-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.028-0500 c20013| 2016-04-06T02:52:43.211-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.028-0500 c20013| 2016-04-06T02:52:43.211-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.029-0500 c20013| 2016-04-06T02:52:43.211-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.030-0500 c20013| 2016-04-06T02:52:43.212-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.031-0500 c20013| 2016-04-06T02:52:43.212-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.031-0500 c20013| 2016-04-06T02:52:43.212-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.032-0500 c20013| 2016-04-06T02:52:43.213-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.034-0500 c20013| 2016-04-06T02:52:43.213-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.035-0500 c20013| 2016-04-06T02:52:43.213-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.035-0500 c20013| 2016-04-06T02:52:43.213-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.037-0500 c20013| 2016-04-06T02:52:43.213-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.038-0500 c20013| 2016-04-06T02:52:43.213-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.039-0500 c20013| 2016-04-06T02:52:43.213-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.040-0500 c20013| 2016-04-06T02:52:43.213-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.041-0500 c20013| 2016-04-06T02:52:43.213-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:07.048-0500 c20013| 2016-04-06T02:52:43.214-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.051-0500 c20013| 2016-04-06T02:52:43.214-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1618 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|3, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.052-0500 c20013| 2016-04-06T02:52:43.214-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1618 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:07.056-0500 c20013| 2016-04-06T02:52:43.214-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1618 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.059-0500 c20013| 2016-04-06T02:52:43.226-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.069-0500 c20013| 2016-04-06T02:52:43.226-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1620 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.071-0500 c20013| 2016-04-06T02:52:43.226-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1620 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:07.072-0500 c20013| 2016-04-06T02:52:43.226-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1620 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.076-0500 c20013| 2016-04-06T02:52:43.231-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1617 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.077-0500 c20013| 2016-04-06T02:52:43.231-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.079-0500 c20013| 2016-04-06T02:52:43.231-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:07.082-0500 c20013| 2016-04-06T02:52:43.231-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1623 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.231-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|4, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.085-0500 c20013| 2016-04-06T02:52:43.231-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1623 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:07.087-0500 c20013| 2016-04-06T02:52:43.231-0500 D COMMAND [conn15] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|72 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|4, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.090-0500 c20013| 2016-04-06T02:52:43.231-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|4, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.094-0500 c20013| 2016-04-06T02:52:43.231-0500 D COMMAND [conn15] Using 'committed' snapshot. 
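Each reporter round above carries two opTimes per member: durableOpTime (journaled to disk) and appliedOpTime (applied in memory). For member 2 they briefly diverge (applied 1459929163000|4 vs durable |3) until the journal catches up, and only when a majority of members is durable at an opTime does the commit point advance, which this node then mirrors as "Updating _lastCommittedOpTime". Condensed from the log, the payload looks like this (it is sent internally by the SyncSourceFeedback thread, not issued by clients):

    // replSetUpdatePosition payload reported upstream, one entry per member.
    var updatePosition = {
        replSetUpdatePosition: 1,
        optimes: [
            { durableOpTime: { ts: Timestamp(1459929163, 4), t: NumberLong(3) },  // on disk
              appliedOpTime: { ts: Timestamp(1459929163, 4), t: NumberLong(3) },  // in memory
              memberId: 2, cfgver: 1 }
            // ... plus entries for the other members this node tracks
        ]
    };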
{ find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|72 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|4, t: 3 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.096-0500 c20013| 2016-04-06T02:52:43.232-0500 D QUERY [conn15] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:07.100-0500 c20013| 2016-04-06T02:52:43.232-0500 I COMMAND [conn15] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll", lastmod: { $gte: Timestamp 1000|72 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|4, t: 3 } }, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.105-0500 c20013| 2016-04-06T02:52:43.234-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1623 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929163000|5, t: 3, h: -8172355748864553859, v: 2, op: "c", ns: "config.$cmd", o: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-65.0", lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -65.0 }, max: { _id: -64.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-65.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.108-0500 c20013| 2016-04-06T02:52:43.234-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929163000|5 and ending at ts: Timestamp 1459929163000|5 [js_test:multi_coll_drop] 2016-04-06T02:54:07.109-0500 c20013| 2016-04-06T02:52:43.235-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:07.111-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.116-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.118-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.120-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.120-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.121-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.124-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.125-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.125-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.128-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.129-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.130-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.131-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.131-0500 c20013| 2016-04-06T02:52:43.235-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.132-0500 c20013| 2016-04-06T02:52:43.235-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.133-0500 c20013| 2016-04-06T02:52:43.235-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-65.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.135-0500 c20013| 2016-04-06T02:52:43.235-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll-_id_-64.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.139-0500 c20013| 2016-04-06T02:52:43.236-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1625 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.236-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|4, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.140-0500 c20013| 2016-04-06T02:52:43.236-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1625 on host mongovm16:20011 
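The oplog entry fetched in Request 1623 above is the config-side commit of a chunk split: one applyOps that atomically rewrites a single chunk document into two adjacent ones, bumping the minor version (Timestamp(1, 73) and Timestamp(1, 74), logged as 1000|73 and 1000|74) under the same lastmodEpoch. Condensed into shell syntax from the log (illustrative; the real command is issued by the shard's split path, not typed into a shell):

    // Atomic two-document chunk split as carried in the oplog entry above.
    db.getSiblingDB("config").runCommand({
        applyOps: [
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-65.0",
                   lastmod: Timestamp(1, 73),   // left half, bumped minor version
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll",
                   min: { _id: -65.0 }, max: { _id: -64.0 }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-65.0" } },
            { op: "u", b: true, ns: "config.chunks",
              o: { _id: "multidrop.coll-_id_-64.0",
                   lastmod: Timestamp(1, 74),   // right half
                   lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'),
                   ns: "multidrop.coll",
                   min: { _id: -64.0 }, max: { _id: MaxKey }, shard: "shard0000" },
              o2: { _id: "multidrop.coll-_id_-64.0" } }
        ],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });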
[js_test:multi_coll_drop] 2016-04-06T02:54:07.141-0500 c20013| 2016-04-06T02:52:43.237-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.143-0500 c20013| 2016-04-06T02:52:43.237-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.144-0500 c20013| 2016-04-06T02:52:43.237-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.144-0500 c20013| 2016-04-06T02:52:43.237-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.145-0500 c20013| 2016-04-06T02:52:43.238-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.169-0500 c20013| 2016-04-06T02:52:43.238-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.177-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.179-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.180-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.182-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.183-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.184-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.184-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.185-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.188-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.189-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.190-0500 c20013| 2016-04-06T02:52:43.240-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.191-0500 c20013| 2016-04-06T02:52:43.241-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.194-0500 c20013| 2016-04-06T02:52:43.241-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:07.197-0500 c20013| 2016-04-06T02:52:43.243-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.200-0500 c20013| 2016-04-06T02:52:43.243-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1626 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.201-0500 c20013| 2016-04-06T02:52:43.243-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1626 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:07.201-0500 c20013| 2016-04-06T02:52:43.243-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1626 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.204-0500 c20013| 2016-04-06T02:52:43.259-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.209-0500 c20013| 2016-04-06T02:52:43.259-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1628 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.210-0500 c20013| 2016-04-06T02:52:43.259-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1628 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:07.210-0500 c20013| 2016-04-06T02:52:43.259-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1628 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.212-0500 c20013| 2016-04-06T02:52:43.260-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1625 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.214-0500 c20013| 2016-04-06T02:52:43.260-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.215-0500 c20013| 2016-04-06T02:52:43.260-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:07.218-0500 c20013| 2016-04-06T02:52:43.260-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1631 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.260-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|5, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.219-0500 c20013| 2016-04-06T02:52:43.260-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1631 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:07.225-0500 c20013| 2016-04-06T02:52:43.263-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1631 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929163000|6, t: 3, h: -317850286324307218, v: 2, op: "i", ns: "config.changelog", o: { _id: "mongovm16-2016-04-06T02:52:43.260-0500-5704c04b65c17830b843f1c6", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929163260), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -65.0 }, max: { _id: MaxKey } }, left: { min: { _id: -65.0 }, max: { _id: -64.0 }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -64.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.229-0500 c20013| 2016-04-06T02:52:43.264-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929163000|6 and ending at ts: Timestamp 1459929163000|6 [js_test:multi_coll_drop] 2016-04-06T02:54:07.230-0500 c20013| 2016-04-06T02:52:43.264-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:07.231-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.232-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.233-0500 2016-04-06T02:53:51.349-0500 I NETWORK [thread2] reconnect mongovm16:20012 (192.168.100.28) ok [js_test:multi_coll_drop] 2016-04-06T02:54:07.233-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.233-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.234-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.234-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.235-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.235-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.236-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.236-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.237-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.239-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.240-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.240-0500 c20013| 2016-04-06T02:52:43.265-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.241-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.242-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.242-0500 c20013| 2016-04-06T02:52:43.265-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.243-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.244-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.244-0500 c20013| 2016-04-06T02:52:43.266-0500 D 
EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.244-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.246-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.246-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.247-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.248-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.249-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.250-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.251-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.254-0500 c20013| 2016-04-06T02:52:43.266-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1633 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.266-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|5, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.255-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.255-0500 c20013| 2016-04-06T02:52:43.266-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.256-0500 c20013| 2016-04-06T02:52:43.267-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1633 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:07.256-0500 c20013| 2016-04-06T02:52:43.267-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.265-0500 c20013| 2016-04-06T02:52:43.273-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.266-0500 c20013| 2016-04-06T02:52:43.273-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:07.274-0500 c20013| 2016-04-06T02:52:43.274-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
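The oplog document in Request 1631 above is an insert into config.changelog: every split is journaled there with the pre-split range plus the resulting left and right chunks, which makes the changelog the natural place to reconstruct a collection's chunk history after the fact, e.g.:

    // Recent split events for this collection, newest first.
    db.getSiblingDB("config").changelog
        .find({ what: "split", ns: "multidrop.coll" })
        .sort({ time: -1 })
        .limit(5);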
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:07.287-0500 c20013| 2016-04-06T02:52:43.274-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.289-0500 2016-04-06T02:53:52.312-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 11 secs, remote host 192.168.100.28:20012) [js_test:multi_coll_drop] 2016-04-06T02:54:07.292-0500 c20012| 2016-04-06T02:53:36.566-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.299-0500 c20012| 2016-04-06T02:53:36.566-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.304-0500 c20012| 2016-04-06T02:53:36.566-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.323-0500 c20012| 2016-04-06T02:53:36.566-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.326-0500 c20012| 2016-04-06T02:53:36.566-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.328-0500 c20012| 2016-04-06T02:53:36.566-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.330-0500 c20012| 2016-04-06T02:53:36.566-0500 D COMMAND [conn38] Using 'committed' snapshot. 
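The "Using idhack" / planSummary: IDHACK entries above mark the planner's fast path for exact-_id point reads: plan enumeration is skipped entirely and the query goes straight through the _id index, hence keysExamined:1 docsExamined:1 in the command stats. Any lookup shaped like the lock reads qualifies:

    // Exact-_id equality takes the IDHACK path instead of normal plan selection.
    db.getSiblingDB("config").locks.find({ _id: "multidrop.coll" }).limit(1);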
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.332-0500 c20012| 2016-04-06T02:53:36.566-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.338-0500 c20012| 2016-04-06T02:53:36.567-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.340-0500 c20012| 2016-04-06T02:53:36.567-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.344-0500 c20012| 2016-04-06T02:53:36.574-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.349-0500 c20012| 2016-04-06T02:53:36.576-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f265'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216576), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.354-0500 c20012| 2016-04-06T02:53:36.576-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.358-0500 c20012| 2016-04-06T02:53:36.576-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.366-0500 c20012| 2016-04-06T02:53:36.577-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.367-0500 c20012| 2016-04-06T02:53:36.577-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.368-0500 c20012| 2016-04-06T02:53:36.577-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.369-0500 c20012| 2016-04-06T02:53:36.577-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.372-0500 c20012| 2016-04-06T02:53:36.577-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f265'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216576), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.378-0500 c20012| 2016-04-06T02:53:36.577-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f265'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216576), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f265'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216576), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.383-0500 c20012| 2016-04-06T02:53:36.580-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.386-0500 c20012| 2016-04-06T02:53:36.580-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.388-0500 c20012| 2016-04-06T02:53:36.580-0500 D COMMAND [conn38] Using 'committed' snapshot. 
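The E11000 above is the expected contention path of the config server's distributed lock, not corruption: the taker runs findAndModify with query { _id: <ns>, state: 0 } and upsert: true, so while the lock document exists but is held (state != 0) the query matches nothing, the upsert tries to insert a second document with the same _id, and the unique _id index rejects it. The caller treats duplicate-key as "lock busy", re-reads config.locks and config.lockpings to inspect the holder, and retries with a fresh ts (note the attempt ObjectIds marching ...f265, ...f266, ...f267). A condensed sketch of one attempt, fields copied from the log:

    // One distributed-lock acquisition attempt; E11000 simply means
    // another holder's lock document already exists under this _id.
    var res = db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },   // only a free lock matches
        update: { $set: {
            ts: new ObjectId(),                       // fresh id for this attempt
            state: 2,                                 // 2 = held, as in the log
            who: "mongovm16:20010:1459929128:185613966:conn5",
            process: "mongovm16:20010:1459929128:185613966",
            when: new Date(),
            why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll"
        } },
        upsert: true, new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    if (res.code === 11000) {
        // Lock busy: see who holds it, then back off and retry.
        printjson(db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" }));
    }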
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.389-0500 c20012| 2016-04-06T02:53:36.580-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.392-0500 c20012| 2016-04-06T02:53:36.581-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.397-0500 c20012| 2016-04-06T02:53:36.581-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.411-0500 c20012| 2016-04-06T02:53:36.581-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.415-0500 c20012| 2016-04-06T02:53:36.581-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.417-0500 c20012| 2016-04-06T02:53:36.581-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.424-0500 c20012| 2016-04-06T02:53:36.582-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.426-0500 c20012| 2016-04-06T02:53:36.582-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.434-0500 c20012| 2016-04-06T02:53:36.584-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.441-0500 c20012| 2016-04-06T02:53:36.584-0500 D COMMAND [conn42] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.443-0500 c20012| 2016-04-06T02:53:36.584-0500 D COMMAND [conn42] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.446-0500 c20012| 2016-04-06T02:53:36.584-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.448-0500 c20012| 2016-04-06T02:53:36.584-0500 D QUERY [conn42] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:07.453-0500 c20012| 2016-04-06T02:53:36.584-0500 I COMMAND [conn42] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.457-0500 c20012| 2016-04-06T02:53:36.585-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f266'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216585), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.459-0500 c20012| 2016-04-06T02:53:36.585-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.460-0500 c20012| 2016-04-06T02:53:36.585-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.462-0500 c20012| 2016-04-06T02:53:36.585-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.464-0500 c20012| 2016-04-06T02:53:36.585-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.467-0500 c20012| 2016-04-06T02:53:36.585-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.468-0500 c20012| 2016-04-06T02:53:36.585-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.471-0500 c20012| 2016-04-06T02:53:36.585-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f266'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216585), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.479-0500 c20012| 2016-04-06T02:53:36.585-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f266'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216585), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f266'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216585), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.483-0500 c20012| 2016-04-06T02:53:36.588-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.484-0500 c20012| 2016-04-06T02:53:36.588-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.498-0500 c20012| 2016-04-06T02:53:36.588-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.499-0500 c20012| 2016-04-06T02:53:36.588-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.504-0500 c20012| 2016-04-06T02:53:36.588-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.509-0500 c20012| 2016-04-06T02:53:36.592-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.512-0500 c20012| 2016-04-06T02:53:36.592-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.515-0500 c20012| 2016-04-06T02:53:36.592-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.517-0500 c20012| 2016-04-06T02:53:36.593-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.520-0500 c20012| 2016-04-06T02:53:36.593-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.521-0500 c20012| 2016-04-06T02:53:36.594-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.522-0500 c20012| 2016-04-06T02:53:36.595-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.525-0500 c20012| 2016-04-06T02:53:36.599-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f267'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216596), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.526-0500 c20012| 2016-04-06T02:53:36.599-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.527-0500 c20012| 2016-04-06T02:53:36.599-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.528-0500 c20012| 2016-04-06T02:53:36.599-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.530-0500 c20012| 2016-04-06T02:53:36.600-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.531-0500 c20012| 2016-04-06T02:53:36.600-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.531-0500 c20012| 2016-04-06T02:53:36.600-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.535-0500 c20012| 2016-04-06T02:53:36.600-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f267'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216596), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.550-0500 c20012| 2016-04-06T02:53:36.600-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f267'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216596), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f267'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216596), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.563-0500 c20012| 2016-04-06T02:53:36.600-0500 D COMMAND [conn38] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.576-0500 c20012| 2016-04-06T02:53:36.600-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.578-0500 c20012| 2016-04-06T02:53:36.600-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.578-0500 c20012| 2016-04-06T02:53:36.600-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.579-0500 c20012| 2016-04-06T02:53:36.600-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.582-0500 c20012| 2016-04-06T02:53:36.602-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.583-0500 c20012| 2016-04-06T02:53:36.602-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.586-0500 c20012| 2016-04-06T02:53:36.602-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.589-0500 c20012| 2016-04-06T02:53:36.602-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.592-0500 c20012| 2016-04-06T02:53:36.602-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.593-0500 c20012| 2016-04-06T02:53:36.603-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.594-0500 c20012| 2016-04-06T02:53:36.603-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.598-0500 c20012| 2016-04-06T02:53:36.605-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f268'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216605), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.599-0500 c20012| 2016-04-06T02:53:36.605-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.604-0500 c20012| 2016-04-06T02:53:36.605-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.605-0500 c20012| 2016-04-06T02:53:36.605-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.607-0500 c20012| 2016-04-06T02:53:36.605-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.607-0500 c20012| 2016-04-06T02:53:36.605-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.607-0500 c20012| 2016-04-06T02:53:36.605-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.611-0500 c20012| 2016-04-06T02:53:36.605-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f268'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216605), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.615-0500 c20012| 2016-04-06T02:53:36.605-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f268'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216605), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f268'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216605), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.617-0500 c20012| 2016-04-06T02:53:36.605-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.619-0500 c20012| 2016-04-06T02:53:36.605-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.622-0500 c20012| 2016-04-06T02:53:36.605-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.624-0500 c20012| 2016-04-06T02:53:36.605-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.629-0500 c20012| 2016-04-06T02:53:36.606-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.632-0500 c20012| 2016-04-06T02:53:36.606-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.635-0500 c20012| 2016-04-06T02:53:36.606-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.638-0500 c20012| 2016-04-06T02:53:36.606-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.639-0500 c20012| 2016-04-06T02:53:36.606-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.641-0500 c20012| 2016-04-06T02:53:36.606-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.641-0500 c20012| 2016-04-06T02:53:36.606-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.642-0500 c20012| 2016-04-06T02:53:36.608-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.644-0500 c20012| 2016-04-06T02:53:36.609-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f269'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216609), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.647-0500 c20012| 2016-04-06T02:53:36.609-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.648-0500 c20012| 2016-04-06T02:53:36.609-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.650-0500 c20012| 2016-04-06T02:53:36.609-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.653-0500 c20012| 2016-04-06T02:53:36.610-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.655-0500 c20012| 2016-04-06T02:53:36.610-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.656-0500 c20012| 2016-04-06T02:53:36.610-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.659-0500 c20012| 2016-04-06T02:53:36.610-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f269'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216609), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.666-0500 c20012| 2016-04-06T02:53:36.610-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f269'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216609), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f269'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216609), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.668-0500 c20012| 2016-04-06T02:53:36.610-0500 D COMMAND [conn38] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.668-0500 c20012| 2016-04-06T02:53:36.610-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.669-0500 c20012| 2016-04-06T02:53:36.610-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.671-0500 c20012| 2016-04-06T02:53:36.610-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.675-0500 c20012| 2016-04-06T02:53:36.611-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.677-0500 c20012| 2016-04-06T02:53:36.612-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.680-0500 c20012| 2016-04-06T02:53:36.612-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.685-0500 c20012| 2016-04-06T02:53:36.612-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.686-0500 c20012| 2016-04-06T02:53:36.612-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.690-0500 c20012| 2016-04-06T02:53:36.612-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.691-0500 c20012| 2016-04-06T02:53:36.617-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.693-0500 c20012| 2016-04-06T02:53:36.618-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.696-0500 c20012| 2016-04-06T02:53:36.619-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216619), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.698-0500 c20012| 2016-04-06T02:53:36.619-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.699-0500 c20012| 2016-04-06T02:53:36.619-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.701-0500 c20012| 2016-04-06T02:53:36.619-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.702-0500 c20012| 2016-04-06T02:53:36.620-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.703-0500 c20012| 2016-04-06T02:53:36.620-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.703-0500 c20012| 2016-04-06T02:53:36.620-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.708-0500 c20012| 2016-04-06T02:53:36.620-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216619), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.714-0500 c20012| 2016-04-06T02:53:36.620-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216619), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f26a'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216619), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.717-0500 c20012| 2016-04-06T02:53:36.620-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.719-0500 c20012| 2016-04-06T02:53:36.620-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.723-0500 c20012| 2016-04-06T02:53:36.620-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.724-0500 c20012| 2016-04-06T02:53:36.620-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.728-0500 c20012| 2016-04-06T02:53:36.620-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.729-0500 c20012| 2016-04-06T02:53:36.621-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.730-0500 c20012| 2016-04-06T02:53:36.621-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.732-0500 c20012| 2016-04-06T02:53:36.621-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.733-0500 c20012| 2016-04-06T02:53:36.621-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.737-0500 c20012| 2016-04-06T02:53:36.621-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.737-0500 c20012| 2016-04-06T02:53:36.621-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.738-0500 c20012| 2016-04-06T02:53:36.622-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.746-0500 c20012| 2016-04-06T02:53:36.623-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216623), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.747-0500 c20012| 2016-04-06T02:53:36.623-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.748-0500 c20012| 2016-04-06T02:53:36.623-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.750-0500 c20012| 2016-04-06T02:53:36.623-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.753-0500 c20012| 2016-04-06T02:53:36.624-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.754-0500 c20012| 2016-04-06T02:53:36.624-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.756-0500 c20012| 2016-04-06T02:53:36.624-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.760-0500 c20012| 2016-04-06T02:53:36.624-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216623), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.764-0500 c20012| 2016-04-06T02:53:36.624-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216623), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f26b'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216623), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.767-0500 c20012| 2016-04-06T02:53:36.624-0500 D COMMAND [conn38] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.769-0500 c20012| 2016-04-06T02:53:36.624-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.771-0500 c20012| 2016-04-06T02:53:36.624-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.772-0500 c20012| 2016-04-06T02:53:36.624-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.784-0500 c20012| 2016-04-06T02:53:36.624-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.788-0500 c20012| 2016-04-06T02:53:36.625-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.791-0500 c20012| 2016-04-06T02:53:36.625-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.795-0500 c20012| 2016-04-06T02:53:36.625-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.796-0500 c20012| 2016-04-06T02:53:36.625-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.799-0500 c20012| 2016-04-06T02:53:36.625-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.800-0500 c20012| 2016-04-06T02:53:36.626-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.802-0500 c20012| 2016-04-06T02:53:36.627-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.809-0500 c20012| 2016-04-06T02:53:36.628-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216628), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.811-0500 c20012| 2016-04-06T02:53:36.628-0500 D QUERY [conn38] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.817-0500 c20012| 2016-04-06T02:53:36.628-0500 D QUERY [conn38] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.820-0500 c20012| 2016-04-06T02:53:36.629-0500 D QUERY [conn38] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.822-0500 c20012| 2016-04-06T02:53:36.629-0500 D - [conn38] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.823-0500 c20012| 2016-04-06T02:53:36.629-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.824-0500 c20012| 2016-04-06T02:53:36.629-0500 D STORAGE [conn38] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:07.827-0500 c20012| 2016-04-06T02:53:36.629-0500 D COMMAND [conn38] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216628), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.832-0500 c20012| 2016-04-06T02:53:36.629-0500 I COMMAND [conn38] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08065c17830b843f26c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216628), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08065c17830b843f26c'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929216628), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.834-0500 c20012| 2016-04-06T02:53:36.632-0500 D COMMAND [conn38] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.836-0500 c20012| 2016-04-06T02:53:36.632-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.840-0500 c20012| 2016-04-06T02:53:36.632-0500 D COMMAND [conn38] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.840-0500 c20012| 2016-04-06T02:53:36.632-0500 D QUERY [conn38] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.848-0500 c20012| 2016-04-06T02:53:36.634-0500 I COMMAND [conn38] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.851-0500 c20012| 2016-04-06T02:53:36.634-0500 D COMMAND [conn38] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.852-0500 c20012| 2016-04-06T02:53:36.634-0500 D COMMAND [conn38] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.854-0500 c20012| 2016-04-06T02:53:36.634-0500 D COMMAND [conn38] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.855-0500 c20012| 2016-04-06T02:53:36.634-0500 D QUERY [conn38] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.860-0500 c20012| 2016-04-06T02:53:36.636-0500 I COMMAND [conn38] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.860-0500 c20012| 2016-04-06T02:53:36.637-0500 D COMMAND [conn38] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.863-0500 c20012| 2016-04-06T02:53:36.638-0500 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.863-0500 c20012| 2016-04-06T02:53:36.891-0500 D COMMAND [conn32] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.866-0500 c20012| 2016-04-06T02:53:36.891-0500 I COMMAND [conn32] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.869-0500 c20012| 
2016-04-06T02:53:36.894-0500 D COMMAND [conn42] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929211993), up: 84, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.869-0500 c20012| 2016-04-06T02:53:36.895-0500 D QUERY [conn42] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.874-0500 c20012| 2016-04-06T02:53:36.895-0500 I WRITE [conn42] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929211993), up: 84, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.881-0500 c20012| 2016-04-06T02:53:36.895-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } cursorid:23538204668 numYields:1 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 493ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.883-0500 c20012| 2016-04-06T02:53:36.899-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.891-0500 c20012| 2016-04-06T02:53:36.902-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.892-0500 c20012| 2016-04-06T02:53:36.902-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:07.907-0500 c20012| 2016-04-06T02:53:36.902-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|2, t: 7 } and is durable through: { ts: Timestamp 1459929216000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.912-0500 c20012| 2016-04-06T02:53:36.902-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.917-0500 c20012| 2016-04-06T02:53:36.902-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|2, t: 7 }, memberId: 0, cfgver: 1 
}, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.920-0500 c20012| 2016-04-06T02:53:36.904-0500 D REPL [conn42] Required snapshot optime: { ts: Timestamp 1459929216000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929216000|1, t: 7 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.924-0500 c20012| 2016-04-06T02:53:36.907-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:07.927-0500 c20012| 2016-04-06T02:53:36.907-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:07.928-0500 c20012| 2016-04-06T02:53:36.907-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|2, t: 7 } and is durable through: { ts: Timestamp 1459929216000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.929-0500 c20012| 2016-04-06T02:53:36.907-0500 D REPL [conn45] Updating _lastCommittedOpTime to { ts: Timestamp 1459929216000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.932-0500 c20012| 2016-04-06T02:53:36.907-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.938-0500 c20012| 2016-04-06T02:53:36.907-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.943-0500 c20012| 2016-04-06T02:53:36.908-0500 I COMMAND [conn42] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929211993), up: 84, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms 
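
The run of near-identical findAndModify attempts above (ts ObjectIds ending f266 through f26c) is the shard's distributed-lock acquisition loop spinning on config.locks: the query { _id: "multidrop.coll", state: 0 } matches only an unlocked lock document, so while another process holds the lock the upsert degenerates into an insert of the already-existing _id and the config server rejects it with E11000. Between attempts the contender re-reads config.locks and config.lockpings with readConcern majority and runs serverStatus, apparently to sample the config server's clock and decide whether the current holder's ping is stale enough for the lock to be overtaken. A minimal mongo-shell sketch of one such attempt follows; `configDB` and the helper name `tryAcquireDistLock` are illustrative assumptions, not the server's internal API:

    // One contending acquisition attempt against the config server's
    // config.locks collection (sketch; not the server's internal code).
    function tryAcquireDistLock(configDB, lockId, who, processId, why) {
        try {
            // state 0 = unlocked. With upsert: true, a query that matches no
            // document turns into an insert of { _id: lockId }; if the lock is
            // already held (state != 0), that insert collides with the existing
            // _id in the unique _id index and fails with E11000, exactly as in
            // the findAndModify records logged above.
            return configDB.locks.findAndModify({
                query: { _id: lockId, state: 0 },
                update: { $set: { ts: new ObjectId(), state: 2, who: who,
                                  process: processId, when: new Date(), why: why } },
                upsert: true,
                new: true,
                writeConcern: { w: "majority", wtimeout: 15000 }
            });
        } catch (e) {
            // Legacy shells throw a plain Error whose message embeds the server
            // errmsg, so check both the code and the message text.
            if (e.code === 11000 || /E11000/.test(e.message || "")) {
                return null; // lock is held; caller re-checks locks/lockpings and retries
            }
            throw e;
        }
    }

The config.mongos ping update just above (conn42, writeConcern w: "majority") shows the other half of the machinery: the write itself applies in 0ms, but the command cannot return until the secondaries' replSetUpdatePosition reports advance _lastCommittedOpTime past the write's optime, hence the intervening "Required snapshot optime ... is not yet part of the current 'committed' snapshot" record and the 13ms total reported for the update command.
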
[js_test:multi_coll_drop] 2016-04-06T02:54:07.948-0500 c20012| 2016-04-06T02:53:36.908-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|1, t: 7 } } cursorid:23538204668 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.950-0500 c20012| 2016-04-06T02:53:36.909-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.956-0500 c20012| 2016-04-06T02:53:36.910-0500 D COMMAND [conn42] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.976-0500 c20012| 2016-04-06T02:53:36.910-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:07.978-0500 c20012| 2016-04-06T02:53:36.910-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.978-0500 c20012| 2016-04-06T02:53:36.910-0500 D QUERY [conn42] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:07.985-0500 c20012| 2016-04-06T02:53:36.910-0500 I COMMAND [conn42] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929216000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:434 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:07.989-0500 c20012| 2016-04-06T02:53:36.914-0500 D COMMAND [conn42] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929216914), up: 89, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:07.992-0500 c20012| 2016-04-06T02:53:36.915-0500 D QUERY [conn42] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:07.996-0500 c20012| 2016-04-06T02:53:36.915-0500 I WRITE [conn42] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929216914), up: 89, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { 
w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.025-0500 c20012| 2016-04-06T02:53:36.915-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|2, t: 7 } } cursorid:23538204668 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.028-0500 c20012| 2016-04-06T02:53:36.922-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.031-0500 c20012| 2016-04-06T02:53:36.923-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.032-0500 c20012| 2016-04-06T02:53:36.923-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.036-0500 c20012| 2016-04-06T02:53:36.923-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.038-0500 c20012| 2016-04-06T02:53:36.923-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.041-0500 c20012| 2016-04-06T02:53:36.923-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.045-0500 c20012| 2016-04-06T02:53:36.926-0500 D REPL [conn42] Required snapshot optime: { ts: Timestamp 1459929216000|3, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929216000|2, t: 7 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.050-0500 c20012| 2016-04-06T02:53:36.927-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 
}, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.051-0500 c20012| 2016-04-06T02:53:36.927-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.055-0500 c20012| 2016-04-06T02:53:36.927-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.057-0500 c20012| 2016-04-06T02:53:36.927-0500 D REPL [conn45] Updating _lastCommittedOpTime to { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.060-0500 c20012| 2016-04-06T02:53:36.928-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.066-0500 c20012| 2016-04-06T02:53:36.928-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.071-0500 c20012| 2016-04-06T02:53:36.928-0500 I COMMAND [conn42] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929216914), up: 89, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.075-0500 c20012| 2016-04-06T02:53:36.928-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|2, t: 7 } } cursorid:23538204668 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.079-0500 c20012| 2016-04-06T02:53:36.934-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.080-0500 c20012| 2016-04-06T02:53:36.997-0500 D COMMAND [conn37] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.083-0500 c20012| 2016-04-06T02:53:36.997-0500 D QUERY [conn37] Only one plan is available; it will be 
run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:08.085-0500 c20012| 2016-04-06T02:53:36.997-0500 I COMMAND [conn37] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:274 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.086-0500 c20012| 2016-04-06T02:53:36.999-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:41448 #46 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:08.088-0500 c20012| 2016-04-06T02:53:36.999-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:41449 #47 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:08.088-0500 c20012| 2016-04-06T02:53:36.999-0500 D COMMAND [conn46] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.090-0500 c20012| 2016-04-06T02:53:36.999-0500 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.091-0500 c20012| 2016-04-06T02:53:36.999-0500 D COMMAND [conn47] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.094-0500 c20012| 2016-04-06T02:53:36.999-0500 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.100-0500 c20012| 2016-04-06T02:53:36.999-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.101-0500 c20012| 2016-04-06T02:53:36.999-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.107-0500 c20012| 2016-04-06T02:53:36.999-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929210000|1, t: 6 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.108-0500 c20012| 2016-04-06T02:53:36.999-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929210000|1, t: 6 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.115-0500 c20012| 2016-04-06T02:53:36.999-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, 
appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.122-0500 c20012| 2016-04-06T02:53:36.999-0500 D COMMAND [conn47] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929210000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.126-0500 c20012| 2016-04-06T02:53:37.014-0500 I COMMAND [conn47] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929210000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 7 } planSummary: COLLSCAN cursorid:22842679084 keysExamined:0 docsExamined:4 numYields:0 nreturned:4 reslen:978 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 14ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.128-0500 c20012| 2016-04-06T02:53:37.033-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.131-0500 c20012| 2016-04-06T02:53:37.049-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.132-0500 c20012| 2016-04-06T02:53:37.049-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.135-0500 c20012| 2016-04-06T02:53:37.049-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929210000|1, t: 6 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.138-0500 c20012| 2016-04-06T02:53:37.049-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|1, t: 7 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.140-0500 c20012| 2016-04-06T02:53:37.049-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.143-0500 c20012| 2016-04-06T02:53:37.050-0500 D COMMAND [conn46] run command admin.$cmd 
{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.144-0500 c20012| 2016-04-06T02:53:37.050-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.149-0500 c20012| 2016-04-06T02:53:37.050-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929210000|1, t: 6 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.151-0500 c20012| 2016-04-06T02:53:37.050-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.159-0500 c20012| 2016-04-06T02:53:37.050-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.163-0500 c20012| 2016-04-06T02:53:37.053-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.165-0500 c20012| 2016-04-06T02:53:37.053-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.169-0500 c20012| 2016-04-06T02:53:37.053-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929210000|1, t: 6 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.171-0500 c20012| 2016-04-06T02:53:37.053-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.174-0500 c20012| 2016-04-06T02:53:37.053-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, 
memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.178-0500 c20012| 2016-04-06T02:53:37.053-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.179-0500 c20012| 2016-04-06T02:53:37.053-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.182-0500 2016-04-06T02:53:52.617-0500 W NETWORK [ReplicaSetMonitorWatcher] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:08.184-0500 c20012| 2016-04-06T02:53:37.053-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929210000|1, t: 6 } and is durable through: { ts: Timestamp 1459929210000|1, t: 6 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.186-0500 c20012| 2016-04-06T02:53:37.053-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.193-0500 c20012| 2016-04-06T02:53:37.053-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.194-0500 c20012| 2016-04-06T02:53:37.136-0500 D COMMAND [conn42] run command admin.$cmd { _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.195-0500 c20012| 2016-04-06T02:53:37.136-0500 D COMMAND [conn42] command: _getUserCacheGeneration [js_test:multi_coll_drop] 2016-04-06T02:54:08.198-0500 c20012| 2016-04-06T02:53:37.136-0500 I COMMAND [conn42] command admin.$cmd command: _getUserCacheGeneration { _getUserCacheGeneration: 1, maxTimeMS: 30000 } numYields:0 reslen:337 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.201-0500 c20012| 2016-04-06T02:53:37.670-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1418 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:47.670-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.202-0500 c20012| 
2016-04-06T02:53:37.670-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1418 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:08.204-0500 c20012| 2016-04-06T02:53:37.671-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1418 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, opTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.205-0500 c20012| 2016-04-06T02:53:37.671-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:39.671Z [js_test:multi_coll_drop] 2016-04-06T02:54:08.207-0500 c20012| 2016-04-06T02:53:38.142-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.207-0500 c20012| 2016-04-06T02:53:38.142-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:08.211-0500 c20012| 2016-04-06T02:53:38.142-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.217-0500 c20012| 2016-04-06T02:53:38.386-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.217-0500 c20012| 2016-04-06T02:53:38.386-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:08.221-0500 c20012| 2016-04-06T02:53:38.386-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.223-0500 c20012| 2016-04-06T02:53:38.399-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1420 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:48.399-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.227-0500 c20012| 2016-04-06T02:53:38.399-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1420 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:08.229-0500 c20012| 2016-04-06T02:53:38.400-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1420 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, opTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.233-0500 c20012| 2016-04-06T02:53:38.400-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:40.400Z [js_test:multi_coll_drop] 2016-04-06T02:54:08.242-0500 c20012| 2016-04-06T02:53:38.885-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:54:08.244-0500 c20012| 
2016-04-06T02:53:38.885-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20013: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:54:08.247-0500 c20012| 2016-04-06T02:53:38.885-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:54:08.247-0500 c20012| 2016-04-06T02:53:39.122-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.249-0500 c20012| 2016-04-06T02:53:39.122-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.252-0500 c20012| 2016-04-06T02:53:39.428-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.252-0500 c20012| 2016-04-06T02:53:39.428-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.256-0500 c20012| 2016-04-06T02:53:39.428-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.261-0500 c20012| 2016-04-06T02:53:39.428-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.269-0500 c20012| 2016-04-06T02:53:39.428-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.273-0500 c20012| 2016-04-06T02:53:39.434-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } cursorid:23538204668 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2500ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.279-0500 c20012| 2016-04-06T02:53:39.535-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, 
lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } cursorid:22842679084 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2502ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.281-0500 c20012| 2016-04-06T02:53:39.537-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.290-0500 c20012| 2016-04-06T02:53:39.553-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.291-0500 c20012| 2016-04-06T02:53:39.553-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.292-0500 c20012| 2016-04-06T02:53:39.553-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.297-0500 c20012| 2016-04-06T02:53:39.553-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.301-0500 c20012| 2016-04-06T02:53:39.553-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.303-0500 c20012| 2016-04-06T02:53:39.671-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1422 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:49.671-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.306-0500 c20012| 2016-04-06T02:53:39.671-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1422 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:08.307-0500 c20012| 2016-04-06T02:53:39.743-0500 D COMMAND [conn44] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.311-0500 c20012| 2016-04-06T02:53:39.744-0500 I COMMAND [conn44] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.312-0500 c20012| 
2016-04-06T02:53:39.745-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.317-0500 c20012| 2016-04-06T02:53:39.752-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1422 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, opTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.318-0500 c20012| 2016-04-06T02:53:39.752-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:41.752Z [js_test:multi_coll_drop] 2016-04-06T02:54:08.319-0500 c20012| 2016-04-06T02:53:40.070-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:41591 #48 (17 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:08.321-0500 c20012| 2016-04-06T02:53:40.071-0500 D COMMAND [conn48] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.324-0500 c20012| 2016-04-06T02:53:40.071-0500 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.333-0500 c20012| 2016-04-06T02:53:40.071-0500 D COMMAND [conn48] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929220070), up: 93, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.336-0500 c20012| 2016-04-06T02:53:40.071-0500 D QUERY [conn48] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.338-0500 c20012| 2016-04-06T02:53:40.071-0500 I WRITE [conn48] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929220070), up: 93, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.342-0500 c20012| 2016-04-06T02:53:40.071-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } cursorid:23538204668 numYields:1 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 326ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.345-0500 c20012| 2016-04-06T02:53:40.072-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } cursorid:22842679084 numYields:1 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } 
protocol:op_command 534ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.347-0500 c20012| 2016-04-06T02:53:40.074-0500 D REPL [conn48] Required snapshot optime: { ts: Timestamp 1459929220000|1, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929216000|3, t: 7 }, name-id: "261" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.349-0500 c20012| 2016-04-06T02:53:40.075-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.355-0500 c20012| 2016-04-06T02:53:40.075-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.357-0500 c20012| 2016-04-06T02:53:40.075-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.357-0500 c20012| 2016-04-06T02:53:40.075-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.360-0500 c20012| 2016-04-06T02:53:40.075-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.363-0500 c20012| 2016-04-06T02:53:40.075-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|1, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.366-0500 c20012| 2016-04-06T02:53:40.075-0500 D REPL [conn46] Required snapshot optime: { ts: Timestamp 1459929220000|1, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929216000|3, t: 7 }, name-id: "261" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.370-0500 c20012| 2016-04-06T02:53:40.075-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.373-0500 c20012| 2016-04-06T02:53:40.076-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.374-0500 c20012| 2016-04-06T02:53:40.076-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.379-0500 c20012| 2016-04-06T02:53:40.076-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|1, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.381-0500 c20012| 2016-04-06T02:53:40.076-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 1459929220000|1, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929216000|3, t: 7 }, name-id: "261" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.384-0500 c20012| 2016-04-06T02:53:40.076-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.392-0500 c20012| 2016-04-06T02:53:40.076-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.395-0500 c20012| 2016-04-06T02:53:40.079-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.396-0500 c20012| 2016-04-06T02:53:40.079-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.401-0500 c20012| 2016-04-06T02:53:40.079-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.406-0500 c20012| 2016-04-06T02:53:40.079-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|1, t: 7 } and is durable through: { ts: Timestamp 1459929220000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.407-0500 c20012| 2016-04-06T02:53:40.079-0500 D REPL [conn46] Updating _lastCommittedOpTime to { ts: Timestamp 1459929220000|1, t: 7 } [js_test:multi_coll_drop] 
2016-04-06T02:54:08.412-0500 c20012| 2016-04-06T02:53:40.079-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.415-0500 c20012| 2016-04-06T02:53:40.080-0500 I COMMAND [conn48] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929220070), up: 93, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 8ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.422-0500 c20012| 2016-04-06T02:53:40.080-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.422-0500 c20012| 2016-04-06T02:53:40.080-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.423-0500 c20012| 2016-04-06T02:53:40.080-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|1, t: 7 } and is durable through: { ts: Timestamp 1459929220000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.426-0500 c20012| 2016-04-06T02:53:40.080-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.433-0500 c20012| 2016-04-06T02:53:40.080-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.438-0500 c20012| 2016-04-06T02:53:40.081-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, 
collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } cursorid:23538204668 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.442-0500 c20012| 2016-04-06T02:53:40.081-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929216000|3, t: 7 } } cursorid:22842679084 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.446-0500 c20012| 2016-04-06T02:53:40.082-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.449-0500 c20012| 2016-04-06T02:53:40.083-0500 D COMMAND [conn48] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929220083), up: 93, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.451-0500 c20012| 2016-04-06T02:53:40.083-0500 D QUERY [conn48] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.456-0500 c20012| 2016-04-06T02:53:40.084-0500 I WRITE [conn48] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929220083), up: 93, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.459-0500 c20012| 2016-04-06T02:53:40.084-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|1, t: 7 } } cursorid:22842679084 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.461-0500 c20012| 2016-04-06T02:53:40.084-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.464-0500 c20012| 2016-04-06T02:53:40.085-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|1, t: 7 } } cursorid:23538204668 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 
2016-04-06T02:54:08.466-0500 c20012| 2016-04-06T02:53:40.087-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.468-0500 c20012| 2016-04-06T02:53:40.088-0500 D REPL [conn48] Required snapshot optime: { ts: Timestamp 1459929220000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929220000|1, t: 7 }, name-id: "262" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.471-0500 c20012| 2016-04-06T02:53:40.089-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.479-0500 c20012| 2016-04-06T02:53:40.089-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.479-0500 c20012| 2016-04-06T02:53:40.089-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.481-0500 c20012| 2016-04-06T02:53:40.089-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|2, t: 7 } and is durable through: { ts: Timestamp 1459929220000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.482-0500 c20012| 2016-04-06T02:53:40.089-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 1459929220000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929220000|1, t: 7 }, name-id: "262" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.485-0500 c20012| 2016-04-06T02:53:40.089-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.488-0500 c20012| 2016-04-06T02:53:40.089-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.492-0500 c20012| 2016-04-06T02:53:40.091-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 
1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.493-0500 c20012| 2016-04-06T02:53:40.091-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.496-0500 c20012| 2016-04-06T02:53:40.091-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|2, t: 7 } and is durable through: { ts: Timestamp 1459929220000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.498-0500 c20012| 2016-04-06T02:53:40.091-0500 D REPL [conn45] Updating _lastCommittedOpTime to { ts: Timestamp 1459929220000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.501-0500 c20012| 2016-04-06T02:53:40.091-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.508-0500 c20012| 2016-04-06T02:53:40.091-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.511-0500 c20012| 2016-04-06T02:53:40.091-0500 I COMMAND [conn48] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929220083), up: 93, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.516-0500 c20012| 2016-04-06T02:53:40.091-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|1, t: 7 } } cursorid:22842679084 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.519-0500 c20012| 2016-04-06T02:53:40.091-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|1, t: 7 } } cursorid:23538204668 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.521-0500 c20012| 
2016-04-06T02:53:40.091-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.523-0500 c20012| 2016-04-06T02:53:40.092-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.526-0500 c20012| 2016-04-06T02:53:40.092-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.527-0500 c20012| 2016-04-06T02:53:40.092-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.528-0500 c20012| 2016-04-06T02:53:40.092-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.533-0500 c20012| 2016-04-06T02:53:40.092-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|2, t: 7 } and is durable through: { ts: Timestamp 1459929220000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.543-0500 c20012| 2016-04-06T02:53:40.092-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.561-0500 c20012| 2016-04-06T02:53:40.095-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.561-0500 c20012| 2016-04-06T02:53:40.095-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.564-0500 c20012| 2016-04-06T02:53:40.095-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 
7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.566-0500 c20012| 2016-04-06T02:53:40.095-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|2, t: 7 } and is durable through: { ts: Timestamp 1459929220000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.569-0500 c20012| 2016-04-06T02:53:40.095-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.571-0500 c20012| 2016-04-06T02:53:40.143-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.572-0500 c20012| 2016-04-06T02:53:40.143-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:08.574-0500 c20012| 2016-04-06T02:53:40.143-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.575-0500 c20012| 2016-04-06T02:53:40.387-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.576-0500 c20012| 2016-04-06T02:53:40.387-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:08.576-0500 c20012| 2016-04-06T02:53:40.397-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 9ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.578-0500 c20012| 2016-04-06T02:53:40.400-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1424 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:50.400-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.579-0500 c20012| 2016-04-06T02:53:40.400-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1424 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:08.580-0500 c20012| 2016-04-06T02:53:40.400-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1424 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, opTime: { ts: Timestamp 1459929220000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.581-0500 c20012| 2016-04-06T02:53:40.400-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:42.400Z [js_test:multi_coll_drop] 2016-04-06T02:54:08.582-0500 
c20012| 2016-04-06T02:53:40.701-0500 D COMMAND [conn42] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:53:40.701-0500-5704c08406c33406d4d9c0c4", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929220701), what: "dropCollection.start", ns: "multidrop.coll", details: {} } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.587-0500 c20012| 2016-04-06T02:53:40.702-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|2, t: 7 } } cursorid:23538204668 numYields:1 nreturned:1 reslen:653 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 610ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.589-0500 c20012| 2016-04-06T02:53:40.702-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|2, t: 7 } } cursorid:22842679084 numYields:1 nreturned:1 reslen:653 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 610ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.590-0500 c20012| 2016-04-06T02:53:40.705-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.591-0500 c20012| 2016-04-06T02:53:40.705-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.598-0500 c20012| 2016-04-06T02:53:40.705-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.598-0500 c20012| 2016-04-06T02:53:40.705-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.602-0500 c20012| 2016-04-06T02:53:40.705-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.608-0500 c20012| 2016-04-06T02:53:40.705-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.612-0500 c20012| 2016-04-06T02:53:40.705-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, 
appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.620-0500 c20012| 2016-04-06T02:53:40.711-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.622-0500 c20012| 2016-04-06T02:53:40.711-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.628-0500 c20012| 2016-04-06T02:53:40.711-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.632-0500 c20012| 2016-04-06T02:53:40.711-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|2, t: 7 } and is durable through: { ts: Timestamp 1459929220000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.642-0500 c20012| 2016-04-06T02:53:40.712-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.657-0500 c20012| 2016-04-06T02:53:40.720-0500 D REPL [conn42] Required snapshot optime: { ts: Timestamp 1459929220000|3, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929220000|2, t: 7 }, name-id: "263" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.663-0500 c20012| 2016-04-06T02:53:40.727-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.663-0500 c20012| 2016-04-06T02:53:40.727-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:08.667-0500 c20011| 
2016-04-06T02:53:18.984-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 5 } and is durable through: { ts: Timestamp 1459929194000|2, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.671-0500 c20013| 2016-04-06T02:52:43.274-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1634 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.674-0500 c20013| 2016-04-06T02:52:43.274-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1634 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:08.675-0500 c20013| 2016-04-06T02:52:43.274-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1634 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.677-0500 c20013| 2016-04-06T02:52:43.291-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.685-0500 c20013| 2016-04-06T02:52:43.291-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1636 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.691-0500 c20013| 2016-04-06T02:52:43.291-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1636 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:08.692-0500 c20013| 2016-04-06T02:52:43.292-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1636 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.710-0500 c20013| 2016-04-06T02:52:43.292-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1633 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.710-0500 c20013| 2016-04-06T02:52:43.292-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|6, t: 3 } [js_test:multi_coll_drop] 
2016-04-06T02:54:08.711-0500 c20013| 2016-04-06T02:52:43.292-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:08.712-0500 c20013| 2016-04-06T02:52:43.292-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1639 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.292-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|6, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:08.715-0500 c20013| 2016-04-06T02:52:43.293-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1639 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:08.715-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:08.719-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:08.719-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:08.719-0500 s20014| Succeeded 81 [js_test:multi_coll_drop] 2016-04-06T02:54:08.724-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.727-0500 s20014| 2016-04-06T02:53:41.234-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:08.728-0500 s20014| 2016-04-06T02:53:41.234-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 503 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.730-0500 s20014| 2016-04-06T02:53:41.234-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
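The s20014 lines above show one full iteration of the mongos distributed-lock loop for 'multidrop.coll': a findAndModify upsert against config.locks that only matches a free lock (state: 0), an E11000 rejection because mongovm16:20010 still holds the lock in state 2, majority reads of the holder and its config.lockpings entry, and a takeover check that fails because the holder pinged only 503 ms ago, far under the 900000 ms takeover time. A minimal shell sketch of that sequence, reconstructed from the requests logged here (the who/process strings are copied from the log; the retry cadence is inferred from the ~500 ms spacing of the attempts):

    var configDB = db.getSiblingDB("config");

    // 1. Atomic grab: the query matches only an unlocked document (state: 0).
    //    While another process holds the lock (state: 2), the query matches
    //    nothing, so the upsert collides with the _id index and fails with
    //    E11000 -- exactly the responses to Requests 802, 810, and 818 above.
    var res = configDB.runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: { ts: ObjectId(), state: 2,
                          who: "mongovm16:20014:1459929123:-665935931:conn1",
                          process: "mongovm16:20014:1459929123:-665935931",
                          when: new Date(), why: "drop" } },
        upsert: true, new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });

    // 2. On E11000, read the current holder and its last ping (the majority
    //    readConcern finds on config.locks and config.lockpings, Requests 804
    //    and 806), then apply the takeover rule: the lock may be forced only
    //    if the holder has gone 900000 ms without pinging.
    if (res.ok === 0 && res.code === 11000) {
        var holder = configDB.locks.findOne({ _id: "multidrop.coll" });
        var ping = configDB.lockpings.findOne({ _id: holder.process });
        // Ping age here is 503 ms < 900000 ms, so the lock "was not acquired";
        // the caller sleeps ~500 ms and retries (the 02:53:41.735 and
        // 02:53:42.243 attempts that follow).
    }

The serverStatus round-trips interleaved with these reads (Requests 808, 816, 824) appear to be how the lock code samples the config server's clock for the ping-age comparison, which is why each retry drags a 22kB serverStatus response through the log.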
[js_test:multi_coll_drop] 2016-04-06T02:54:08.733-0500 s20014| 2016-04-06T02:53:41.735-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08506c33406d4d9c0c7, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:08.735-0500 s20014| 2016-04-06T02:53:41.735-0500 D ASIO [conn1] startCommand: RemoteCommand 802 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:11.735-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c7'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221735), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.737-0500 s20014| 2016-04-06T02:53:41.735-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 802 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.745-0500 s20015| 2016-04-06T02:53:48.969-0500 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 155 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:18.969-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929228969) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.751-0500 c20013| 2016-04-06T02:52:43.299-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1639 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929163000|7, t: 3, h: 2232396361430522479, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { state: 0 } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.756-0500 c20012| 2016-04-06T02:53:40.727-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.756-0500 c20012| 2016-04-06T02:53:40.727-0500 D REPL [conn45] Updating _lastCommittedOpTime to { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.758-0500 c20012| 2016-04-06T02:53:40.727-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|2, t: 7 } and is durable through: { ts: Timestamp 1459929220000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.761-0500 c20012| 2016-04-06T02:53:40.727-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|2, t: 7 } } cursorid:23538204668 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 21ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.762-0500 c20013| 2016-04-06T02:52:43.299-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929163000|7 and ending at ts: Timestamp 1459929163000|7 [js_test:multi_coll_drop] 
2016-04-06T02:54:08.764-0500 s20014| 2016-04-06T02:53:41.735-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 802 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.765-0500 c20013| 2016-04-06T02:52:43.299-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:08.766-0500 s20015| 2016-04-06T02:53:48.969-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 155 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.766-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:08.767-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:08.770-0500 s20015| 2016-04-06T02:53:48.982-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 155 finished with response: { lastErrorObject: { updatedExisting: true, n: 1 }, value: { _id: "mongovm16:20015:1459929127:-1485108316", ping: new Date(1459929127338) }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.770-0500 s20015| 2016-04-06T02:53:49.144-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:08.772-0500 s20015| 2016-04-06T02:53:49.144-0500 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:08.773-0500 s20015| 2016-04-06T02:53:49.144-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:54:08.774-0500 s20015| 2016-04-06T02:53:49.145-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20013, no events [js_test:multi_coll_drop] 2016-04-06T02:54:08.777-0500 s20015| 2016-04-06T02:53:49.145-0500 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 192.168.100.28:20011, no events [js_test:multi_coll_drop] 2016-04-06T02:54:08.778-0500 s20014| 2016-04-06T02:53:41.735-0500 D ASIO [conn1] startCommand: RemoteCommand 804 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:11.735-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.782-0500 c20011| 2016-04-06T02:53:18.984-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.783-0500 s20014| 2016-04-06T02:53:41.736-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 804 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.785-0500 c20011| 2016-04-06T02:53:18.984-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.790-0500 
c20012| 2016-04-06T02:53:40.727-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:08.793-0500 s20015| 2016-04-06T02:53:50.092-0500 D ASIO [Balancer] startCommand: RemoteCommand 157 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:20.092-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929230092), up: 103, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.796-0500 c20011| 2016-04-06T02:53:18.984-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|3, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:08.801-0500 c20011| 2016-04-06T02:53:18.984-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.805-0500 c20011| 2016-04-06T02:53:18.984-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.816-0500 s20014| 2016-04-06T02:53:41.736-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 804 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.848-0500 s20014| 2016-04-06T02:53:41.736-0500 D ASIO [conn1] startCommand: RemoteCommand 806 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:11.736-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.850-0500 s20014| 2016-04-06T02:53:41.736-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 806 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.854-0500 s20014| 
2016-04-06T02:53:41.736-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 806 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.861-0500 s20014| 2016-04-06T02:53:41.736-0500 D ASIO [conn1] startCommand: RemoteCommand 808 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:11.736-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.864-0500 s20014| 2016-04-06T02:53:41.736-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 808 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.887-0500 s20014| 2016-04-06T02:53:41.743-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 808 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 104.0, uptimeMillis: 104596, uptimeEstimate: 90.0, localTime: new Date(1459929221736), asserts: { regular: 0, warning: 0, msg: 0, user: 43, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133857176, page_faults: 0 }, globalLock: { totalTime: 104594000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3793, w: 806, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1248, w: 251, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 675, w: 221 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 587, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 208747, bytesOut: 1406952, numRequests: 865 }, opcounters: { insert: 6, query: 265, update: 10, delete: 0, getmore: 113, command: 490 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133858696, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1327104, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1827928, total_free_bytes: 2935416, central_cache_free_bytes: 205344, transfer_cache_free_bytes: 902144, thread_cache_free_bytes: 1827928, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, 
num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1574, num_central_objs: 163, num_transfer_objs: 1280, free_bytes: 96544, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 649, num_central_objs: 61, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 552, num_central_objs: 73, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 484, num_central_objs: 50, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_pe .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 74 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 128 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 42 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 430, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 104, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 268, scannedObjects: 400 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 459, waits: 1673, scheduledNetCmd: 92, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 542, scheduledWork: 1828, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, 
free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:08.887-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:08.888-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:08.891-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:08.892-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:08.893-0500 s20014| Succeeded 81 [js_test:multi_coll_drop] 2016-04-06T02:54:08.899-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.901-0500 s20014| 2016-04-06T02:53:41.743-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:08.903-0500 s20014| 2016-04-06T02:53:41.743-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 1003 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.904-0500 s20014| 2016-04-06T02:53:41.743-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:08.909-0500 s20014| 2016-04-06T02:53:42.243-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08606c33406d4d9c0c8, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:08.912-0500 s20014| 2016-04-06T02:53:42.243-0500 D ASIO [conn1] startCommand: RemoteCommand 810 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:12.243-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c8'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222243), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.914-0500 s20014| 2016-04-06T02:53:42.244-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 810 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.916-0500 s20014| 2016-04-06T02:53:42.245-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 810 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.918-0500 s20014| 2016-04-06T02:53:42.245-0500 D ASIO [conn1] startCommand: RemoteCommand 812 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:12.245-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.919-0500 s20014| 
2016-04-06T02:53:42.245-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 812 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.923-0500 s20014| 2016-04-06T02:53:42.245-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 812 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.934-0500 s20014| 2016-04-06T02:53:42.246-0500 D ASIO [conn1] startCommand: RemoteCommand 814 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:12.246-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.942-0500 s20014| 2016-04-06T02:53:42.256-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 814 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.956-0500 s20014| 2016-04-06T02:53:42.257-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 814 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.961-0500 s20014| 2016-04-06T02:53:42.257-0500 D ASIO [conn1] startCommand: RemoteCommand 816 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:12.257-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.966-0500 s20014| 2016-04-06T02:53:42.258-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 816 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:08.986-0500 s20014| 2016-04-06T02:53:42.260-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 816 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 105.0, uptimeMillis: 105118, uptimeEstimate: 91.0, localTime: new Date(1459929222258), asserts: { regular: 0, warning: 0, msg: 0, user: 44, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133857432, page_faults: 0 }, globalLock: { totalTime: 105115000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3800, w: 807, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1251, w: 252, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 677, w: 222 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 588, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 209880, bytesOut: 1434505, numRequests: 870 }, opcounters: { insert: 6, query: 267, update: 10, delete: 0, getmore: 113, command: 493 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133858952, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1327104, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1819656, total_free_bytes: 2935160, central_cache_free_bytes: 205168, transfer_cache_free_bytes: 910336, thread_cache_free_bytes: 1819656, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1622, num_central_objs: 114, num_transfer_objs: 1280, free_bytes: 96512, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 645, num_central_objs: 65, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 552, num_central_objs: 73, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 495, num_central_objs: 39, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_pe .......... 
cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 75 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 128 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 43 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 432, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 106, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 270, scannedObjects: 402 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 459, waits: 1681, scheduledNetCmd: 93, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 543, scheduledWork: 1836, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:08.987-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:08.987-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:08.987-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:08.988-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:08.989-0500 s20014| Succeeded 82 [js_test:multi_coll_drop] 2016-04-06T02:54:08.994-0500 s20014| Canceled..." 
}, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:08.994-0500 s20014| 2016-04-06T02:53:42.260-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:08.996-0500 s20014| 2016-04-06T02:53:42.260-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 1527 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:08.998-0500 s20014| 2016-04-06T02:53:42.260-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:09.026-0500 s20014| 2016-04-06T02:53:42.760-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08606c33406d4d9c0c9, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.037-0500 s20014| 2016-04-06T02:53:42.761-0500 D ASIO [conn1] startCommand: RemoteCommand 818 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:12.761-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c9'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222760), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.037-0500 s20014| 2016-04-06T02:53:42.761-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 818 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.041-0500 s20014| 2016-04-06T02:53:42.761-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 818 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.042-0500 s20014| 2016-04-06T02:53:42.762-0500 D ASIO [conn1] startCommand: RemoteCommand 820 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:12.762-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.044-0500 s20014| 2016-04-06T02:53:42.762-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 820 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.051-0500 s20014| 2016-04-06T02:53:42.762-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 820 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: 
MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.053-0500 s20014| 2016-04-06T02:53:42.762-0500 D ASIO [conn1] startCommand: RemoteCommand 822 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:12.762-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.054-0500 s20014| 2016-04-06T02:53:42.762-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 822 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.057-0500 s20014| 2016-04-06T02:53:42.763-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 822 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.063-0500 s20014| 2016-04-06T02:53:42.763-0500 D ASIO [conn1] startCommand: RemoteCommand 824 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:12.763-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.065-0500 s20014| 2016-04-06T02:53:42.763-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 824 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.082-0500 s20014| 2016-04-06T02:53:42.764-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 824 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 105.0, uptimeMillis: 105623, uptimeEstimate: 91.0, localTime: new Date(1459929222763), asserts: { regular: 0, warning: 0, msg: 0, user: 45, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133857688, page_faults: 0 }, globalLock: { totalTime: 105620000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3805, w: 808, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1253, w: 253, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 679, w: 223 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 588, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 211013, bytesOut: 1462058, numRequests: 875 }, opcounters: { insert: 6, query: 269, update: 10, delete: 0, getmore: 113, command: 496 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133859208, heap_size: 138121216 
}, tcmalloc: { pageheap_free_bytes: 1327104, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1834776, total_free_bytes: 2934904, central_cache_free_bytes: 189792, transfer_cache_free_bytes: 910336, thread_cache_free_bytes: 1834776, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1621, num_central_objs: 114, num_transfer_objs: 1280, free_bytes: 96480, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 580, num_central_objs: 45, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 495, num_central_objs: 39, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_per .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 76 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 128 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 44 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, 
total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 434, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 108, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 272, scannedObjects: 404 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 459, waits: 1687, scheduledNetCmd: 94, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 544, scheduledWork: 1842, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.082-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.083-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:09.083-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.083-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.084-0500 s20014| Succeeded 83 [js_test:multi_coll_drop] 2016-04-06T02:54:09.086-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.091-0500 s20014| 2016-04-06T02:53:42.764-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:09.091-0500 s20014| 2016-04-06T02:53:42.764-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 2033 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:09.097-0500 s20014| 2016-04-06T02:53:42.764-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
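[annotation] The stretch above is one complete pass of the cycle that repeats through this whole section: s20014 tries to take the distributed lock 'multidrop.coll' for the drop, loses to the chunk-split lock still held by mongovm16:20010, and backs off. The acquire step is the findAndModify against config.locks. A minimal mongo-shell sketch of that same command follows; the who/process strings are copied from the log, while the ObjectId() and new Date() stand in for the fresh lockSessionID and timestamp each attempt generates.

    // Sketch of the acquire attempt seen above: match only an unlocked
    // document (state: 0) and upsert it into the locked state (state: 2).
    var cfg = db.getSiblingDB("config");
    var res = cfg.runCommand({
        findAndModify: "locks",
        query:  { _id: "multidrop.coll", state: 0 },
        update: { $set: { ts: ObjectId(), state: 2,
                          who: "mongovm16:20014:1459929123:-665935931:conn1",
                          process: "mongovm16:20014:1459929123:-665935931",
                          when: new Date(), why: "drop" } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // While the lock document exists in state 2, the query matches nothing,
    // the upsert tries to insert a second document with the same _id, and the
    // command fails with code 11000 (the E11000 duplicate key responses in the
    // log) -- which the lock manager treats as "lock busy", not as an error.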
[js_test:multi_coll_drop] 2016-04-06T02:54:09.101-0500 s20014| 2016-04-06T02:53:43.265-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08706c33406d4d9c0ca, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.105-0500 s20014| 2016-04-06T02:53:43.265-0500 D ASIO [conn1] startCommand: RemoteCommand 826 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:13.265-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0ca'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223265), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.106-0500 s20014| 2016-04-06T02:53:43.265-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 826 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.110-0500 s20014| 2016-04-06T02:53:43.266-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 826 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.114-0500 s20014| 2016-04-06T02:53:43.266-0500 D ASIO [conn1] startCommand: RemoteCommand 828 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:13.266-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.116-0500 s20014| 2016-04-06T02:53:43.266-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 828 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.121-0500 s20014| 2016-04-06T02:53:43.267-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 828 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.126-0500 s20014| 2016-04-06T02:53:43.267-0500 D ASIO [conn1] startCommand: RemoteCommand 830 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:13.267-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.127-0500 s20014| 2016-04-06T02:53:43.267-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 830 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.131-0500 s20014| 2016-04-06T02:53:43.267-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 830 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 
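[annotation] After each E11000 the mongos gathers the inputs for a possible takeover: it re-reads the lock document (showing the holder and its why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey })") and the holder's config.lockpings entry, both at readConcern majority pinned to its last-seen config optime, so the decision is based only on committed config state. A sketch of the two reads, with the afterOpTime clause omitted because it is internal ShardRegistry plumbing rather than user-facing syntax:

    // Committed reads backing the takeover decision (sketch).
    var cfg = db.getSiblingDB("config");
    cfg.runCommand({ find: "locks", filter: { _id: "multidrop.coll" },
                     readConcern: { level: "majority" },
                     limit: 1, maxTimeMS: 30000 });
    cfg.runCommand({ find: "lockpings",
                     filter: { _id: "mongovm16:20010:1459929128:185613966" },
                     readConcern: { level: "majority" },
                     limit: 1, maxTimeMS: 30000 });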
2016-04-06T02:54:09.137-0500 s20014| 2016-04-06T02:53:43.267-0500 D ASIO [conn1] startCommand: RemoteCommand 832 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:13.267-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.142-0500 s20014| 2016-04-06T02:53:43.267-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 832 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.176-0500 s20014| 2016-04-06T02:53:43.269-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 832 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 106.0, uptimeMillis: 106127, uptimeEstimate: 92.0, localTime: new Date(1459929223267), asserts: { regular: 0, warning: 0, msg: 0, user: 46, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133857944, page_faults: 0 }, globalLock: { totalTime: 106125000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3824, w: 809, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1262, w: 254, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 681, w: 224 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 595, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 213276, bytesOut: 1490027, numRequests: 883 }, opcounters: { insert: 6, query: 271, update: 10, delete: 0, getmore: 115, command: 500 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133859464, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1327104, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1831384, total_free_bytes: 2934648, central_cache_free_bytes: 192928, transfer_cache_free_bytes: 910336, thread_cache_free_bytes: 1831384, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1638, num_central_objs: 96, num_transfer_objs: 1280, free_bytes: 96448, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, 
num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 580, num_central_objs: 45, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 493, num_central_objs: 41, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_per_ .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 76 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 130 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 45 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 436, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 110, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 274, scannedObjects: 406 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 461, waits: 1697, scheduledNetCmd: 94, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 545, scheduledWork: 1854, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.180-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.181-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 
2016-04-06T02:54:09.182-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.182-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.187-0500 s20014| Succeeded 83 [js_test:multi_coll_drop] 2016-04-06T02:54:09.192-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.194-0500 s20014| 2016-04-06T02:53:43.269-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:09.196-0500 s20014| 2016-04-06T02:53:43.269-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 2537 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:09.202-0500 s20014| 2016-04-06T02:53:43.269-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:09.209-0500 s20014| 2016-04-06T02:53:43.769-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08706c33406d4d9c0cb, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.220-0500 s20014| 2016-04-06T02:53:43.769-0500 D ASIO [conn1] startCommand: RemoteCommand 834 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:13.769-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0cb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223769), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.225-0500 s20014| 2016-04-06T02:53:43.769-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 834 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.227-0500 s20014| 2016-04-06T02:53:43.770-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 834 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.231-0500 s20014| 2016-04-06T02:53:43.770-0500 D ASIO [conn1] startCommand: RemoteCommand 836 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:13.770-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.231-0500 s20014| 2016-04-06T02:53:43.770-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 836 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.246-0500 s20014| 2016-04-06T02:53:43.770-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 836 finished with response: { waitedMS: 0, 
cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.248-0500 s20014| 2016-04-06T02:53:43.770-0500 D ASIO [conn1] startCommand: RemoteCommand 838 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:13.770-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.250-0500 s20014| 2016-04-06T02:53:43.770-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 838 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.255-0500 s20014| 2016-04-06T02:53:43.771-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 838 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.258-0500 s20014| 2016-04-06T02:53:43.771-0500 D ASIO [conn1] startCommand: RemoteCommand 840 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:13.771-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.261-0500 s20014| 2016-04-06T02:53:43.771-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 840 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.273-0500 s20014| 2016-04-06T02:53:43.772-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 840 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 106.0, uptimeMillis: 106631, uptimeEstimate: 92.0, localTime: new Date(1459929223771), asserts: { regular: 0, warning: 0, msg: 0, user: 47, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133858200, page_faults: 0 }, globalLock: { totalTime: 106628000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3829, w: 810, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1264, w: 255, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 683, w: 225 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 595, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 214237, bytesOut: 1517064, numRequests: 887 }, opcounters: { insert: 6, query: 273, update: 10, delete: 0, getmore: 115, command: 502 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133859720, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1327104, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1831288, total_free_bytes: 2934392, central_cache_free_bytes: 192768, transfer_cache_free_bytes: 910336, thread_cache_free_bytes: 1831288, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1637, num_central_objs: 96, num_transfer_objs: 1280, free_bytes: 96416, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 522, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 517, num_central_objs: 17, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_per .......... 
cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 76 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 130 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 46 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 438, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 112, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 276, scannedObjects: 408 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 461, waits: 1701, scheduledNetCmd: 95, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 546, scheduledWork: 1858, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.274-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.274-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:09.274-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.275-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.275-0500 s20014| Succeeded 84 [js_test:multi_coll_drop] 2016-04-06T02:54:09.282-0500 s20014| Canceled..." 
}, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.285-0500 s20014| 2016-04-06T02:53:43.773-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:09.286-0500 s20014| 2016-04-06T02:53:43.773-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 3041 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:09.288-0500 s20014| 2016-04-06T02:53:43.773-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:09.292-0500 s20014| 2016-04-06T02:53:44.273-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08806c33406d4d9c0cc, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.299-0500 s20014| 2016-04-06T02:53:44.273-0500 D ASIO [conn1] startCommand: RemoteCommand 842 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:14.273-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cc'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224273), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.300-0500 s20014| 2016-04-06T02:53:44.273-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 842 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.304-0500 s20014| 2016-04-06T02:53:44.274-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 842 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.308-0500 s20014| 2016-04-06T02:53:44.274-0500 D ASIO [conn1] startCommand: RemoteCommand 844 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:14.274-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.313-0500 s20014| 2016-04-06T02:53:44.274-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 844 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.327-0500 s20014| 2016-04-06T02:53:44.274-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 844 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: 
MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.344-0500 s20014| 2016-04-06T02:53:44.274-0500 D ASIO [conn1] startCommand: RemoteCommand 846 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:14.274-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.346-0500 s20014| 2016-04-06T02:53:44.275-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 846 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.356-0500 s20014| 2016-04-06T02:53:44.275-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 846 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.362-0500 s20014| 2016-04-06T02:53:44.277-0500 D ASIO [conn1] startCommand: RemoteCommand 848 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:14.277-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.364-0500 s20014| 2016-04-06T02:53:44.277-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 848 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.408-0500 s20014| 2016-04-06T02:53:44.283-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 848 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 107.0, uptimeMillis: 107137, uptimeEstimate: 93.0, localTime: new Date(1459929224277), asserts: { regular: 0, warning: 0, msg: 0, user: 48, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133858456, page_faults: 0 }, globalLock: { totalTime: 107134000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3836, w: 811, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1267, w: 256, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 685, w: 226 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 596, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 215370, bytesOut: 1544617, numRequests: 892 }, opcounters: { insert: 6, query: 275, update: 10, delete: 0, getmore: 115, command: 505 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133859976, heap_size: 138121216 
}, tcmalloc: { pageheap_free_bytes: 1327104, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1827864, total_free_bytes: 2934136, central_cache_free_bytes: 195936, transfer_cache_free_bytes: 910336, thread_cache_free_bytes: 1827864, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1636, num_central_objs: 96, num_transfer_objs: 1280, free_bytes: 96384, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 522, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 517, num_central_objs: 17, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_per .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 77 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 130 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 47 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, 
total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 440, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 114, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 278, scannedObjects: 410 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 461, waits: 1709, scheduledNetCmd: 95, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 546, scheduledWork: 1866, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.410-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.411-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:09.413-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.416-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.417-0500 s20014| Succeeded 84 [js_test:multi_coll_drop] 2016-04-06T02:54:09.422-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.426-0500 s20014| 2016-04-06T02:53:44.283-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:09.429-0500 s20014| 2016-04-06T02:53:44.283-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 3545 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:09.430-0500 s20014| 2016-04-06T02:53:44.283-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
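[annotation] The "could not force lock" lines show the takeover arithmetic. The holder's ping is stuck at 02:53:11.721, and the elapsed figure (2033, 2537, 3041, 3545 ms so far, one step per ~500 ms retry) is how long this mongos has observed that ping stay unchanged; only once it exceeds the 900000 ms (15 minute) lock timeout would the lock be forced. A sketch of the check, with stand-in values since the real path takes "now" from the config server and the ping times from config.lockpings:

    // Takeover check as reflected in the log (sketch, not the server source).
    var takeoverMillis = 900000;                     // "lock timeout : 900000 ms"
    var configServerNow = Date.now();                // stand-in for the config server's clock
    var pingLastChangedAt = configServerNow - 2033;  // stand-in: ping unchanged for 2033 ms
    var elapsed = configServerNow - pingLastChangedAt;
    if (elapsed < takeoverMillis) {
        // "could not force lock ... because elapsed time 2033 < takeover time 900000 ms"
        print("distributed lock 'multidrop.coll' was not acquired.");
    }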
[js_test:multi_coll_drop] 2016-04-06T02:54:09.433-0500 s20014| 2016-04-06T02:53:44.783-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08806c33406d4d9c0cd, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.436-0500 s20014| 2016-04-06T02:53:44.783-0500 D ASIO [conn1] startCommand: RemoteCommand 850 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:14.783-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cd'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224783), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.437-0500 s20014| 2016-04-06T02:53:44.783-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 850 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.439-0500 s20014| 2016-04-06T02:53:44.784-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 850 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.454-0500 s20014| 2016-04-06T02:53:44.784-0500 D ASIO [conn1] startCommand: RemoteCommand 852 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:14.784-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.455-0500 s20014| 2016-04-06T02:53:44.784-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 852 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.465-0500 s20014| 2016-04-06T02:53:44.784-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 852 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.468-0500 s20014| 2016-04-06T02:53:44.784-0500 D ASIO [conn1] startCommand: RemoteCommand 854 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:14.784-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.469-0500 s20014| 2016-04-06T02:53:44.784-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 854 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.476-0500 s20014| 2016-04-06T02:53:44.784-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 854 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 
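[annotation] The serverStatus round trip in each cycle, and the 22kB truncated responses that dominate this log ("log line attempted (22kB) over max size (10kB), printing beginning and end"), appear to exist only so the lock code can read the config server's clock and election id; the localTime field near the front of each dump is what feeds the elapsed-time check above. A sketch of that reading, on the assumption that localTime is the consumed field:

    // All the lock code seems to need from the 22kB serverStatus response is
    // the config server's notion of "now" (sketch):
    var status = db.getSiblingDB("admin").runCommand({ serverStatus: 1, maxTimeMS: 30000 });
    var configServerNow = status.localTime;  // feeds the elapsed-time comparison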
2016-04-06T02:54:09.483-0500 s20014| 2016-04-06T02:53:44.785-0500 D ASIO [conn1] startCommand: RemoteCommand 856 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:14.785-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.494-0500 s20014| 2016-04-06T02:53:44.785-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 856 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.515-0500 s20014| 2016-04-06T02:53:44.786-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 856 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 107.0, uptimeMillis: 107645, uptimeEstimate: 93.0, localTime: new Date(1459929224785), asserts: { regular: 0, warning: 0, msg: 0, user: 49, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133858712, page_faults: 0 }, globalLock: { totalTime: 107642000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3841, w: 812, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1269, w: 257, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 687, w: 227 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 596, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 216503, bytesOut: 1572170, numRequests: 897 }, opcounters: { insert: 6, query: 277, update: 10, delete: 0, getmore: 115, command: 508 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133860232, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1327104, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1837320, total_free_bytes: 2933880, central_cache_free_bytes: 186224, transfer_cache_free_bytes: 910336, thread_cache_free_bytes: 1837320, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1663, num_central_objs: 68, num_transfer_objs: 1280, free_bytes: 96352, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, 
num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 522, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 526, num_central_objs: 8, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_per_ .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 78 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 130 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 48 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 442, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 116, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 280, scannedObjects: 412 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 461, waits: 1715, scheduledNetCmd: 96, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 547, scheduledWork: 1872, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.516-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.516-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 
2016-04-06T02:54:09.520-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.522-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.526-0500 s20014| Succeeded 85 [js_test:multi_coll_drop] 2016-04-06T02:54:09.558-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.567-0500 s20014| 2016-04-06T02:53:44.786-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:09.576-0500 s20014| 2016-04-06T02:53:44.786-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 4055 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:09.594-0500 s20014| 2016-04-06T02:53:44.786-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:09.600-0500 s20014| 2016-04-06T02:53:45.286-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08906c33406d4d9c0ce, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.605-0500 s20014| 2016-04-06T02:53:45.286-0500 D ASIO [conn1] startCommand: RemoteCommand 858 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:15.286-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0ce'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225286), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.606-0500 s20014| 2016-04-06T02:53:45.286-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 858 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.610-0500 s20014| 2016-04-06T02:53:45.287-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 858 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.612-0500 s20014| 2016-04-06T02:53:45.287-0500 D ASIO [conn1] startCommand: RemoteCommand 860 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:15.287-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.614-0500 s20014| 2016-04-06T02:53:45.287-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 860 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.621-0500 s20014| 2016-04-06T02:53:45.287-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 860 finished with response: { waitedMS: 0, 
cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.626-0500 s20014| 2016-04-06T02:53:45.288-0500 D ASIO [conn1] startCommand: RemoteCommand 862 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:15.287-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.626-0500 s20014| 2016-04-06T02:53:45.288-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 862 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.633-0500 s20014| 2016-04-06T02:53:45.288-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 862 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.635-0500 s20014| 2016-04-06T02:53:45.288-0500 D ASIO [conn1] startCommand: RemoteCommand 864 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:15.288-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.638-0500 s20014| 2016-04-06T02:53:45.288-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 864 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.649-0500 s20014| 2016-04-06T02:53:45.289-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 864 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 108.0, uptimeMillis: 108148, uptimeEstimate: 94.0, localTime: new Date(1459929225288), asserts: { regular: 0, warning: 0, msg: 0, user: 50, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133858968, page_faults: 0 }, globalLock: { totalTime: 108145000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3848, w: 813, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1272, w: 258, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 689, w: 228 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 597, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 217464, bytesOut: 1599207, numRequests: 901 }, opcounters: { insert: 6, query: 279, update: 10, delete: 0, getmore: 115, command: 510 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133860488, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1327104, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1828008, total_free_bytes: 2933624, central_cache_free_bytes: 187088, transfer_cache_free_bytes: 918528, thread_cache_free_bytes: 1828008, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1730, num_central_objs: 0, num_transfer_objs: 1280, free_bytes: 96320, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 522, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 526, num_central_objs: 8, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_per_s .......... 
cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 78 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 130 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 49 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 444, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 118, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 282, scannedObjects: 414 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 461, waits: 1721, scheduledNetCmd: 96, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 547, scheduledWork: 1878, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.651-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.651-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:09.652-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.657-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.658-0500 s20014| Succeeded 85 [js_test:multi_coll_drop] 2016-04-06T02:54:09.661-0500 s20014| Canceled..." 
}, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.661-0500 s20014| 2016-04-06T02:53:45.290-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:09.662-0500 s20014| 2016-04-06T02:53:45.290-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 4558 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:09.663-0500 s20014| 2016-04-06T02:53:45.290-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:09.666-0500 s20014| 2016-04-06T02:53:45.790-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08906c33406d4d9c0cf, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.670-0500 s20014| 2016-04-06T02:53:45.790-0500 D ASIO [conn1] startCommand: RemoteCommand 866 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:15.790-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0cf'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225790), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.693-0500 s20014| 2016-04-06T02:53:45.791-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 866 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.718-0500 s20014| 2016-04-06T02:53:45.791-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 866 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.729-0500 s20014| 2016-04-06T02:53:45.797-0500 D ASIO [conn1] startCommand: RemoteCommand 868 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:15.797-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.730-0500 s20014| 2016-04-06T02:53:45.797-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 868 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.732-0500 s20014| 2016-04-06T02:53:45.798-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 868 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: 
MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.738-0500 s20014| 2016-04-06T02:53:45.798-0500 D ASIO [conn1] startCommand: RemoteCommand 870 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:15.798-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.739-0500 s20014| 2016-04-06T02:53:45.798-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 870 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.744-0500 s20014| 2016-04-06T02:53:45.800-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 870 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.746-0500 s20014| 2016-04-06T02:53:45.800-0500 D ASIO [conn1] startCommand: RemoteCommand 872 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:15.800-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.747-0500 s20014| 2016-04-06T02:53:45.800-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 872 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.762-0500 s20014| 2016-04-06T02:53:45.802-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 872 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 108.0, uptimeMillis: 108661, uptimeEstimate: 94.0, localTime: new Date(1459929225801), asserts: { regular: 0, warning: 0, msg: 0, user: 51, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133859224, page_faults: 0 }, globalLock: { totalTime: 108658000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3899, w: 814, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1297, w: 259, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 707, w: 229 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 604, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 219727, bytesOut: 1627176, numRequests: 909 }, opcounters: { insert: 6, query: 281, update: 10, delete: 0, getmore: 117, command: 514 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133860744, heap_size: 138121216 
}, tcmalloc: { pageheap_free_bytes: 1318912, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1834696, total_free_bytes: 2941560, central_cache_free_bytes: 188336, transfer_cache_free_bytes: 918528, thread_cache_free_bytes: 1834696, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1808, num_central_objs: 177, num_transfer_objs: 1280, free_bytes: 104480, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 522, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 532, num_central_objs: 2, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_pe .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 78 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 132 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 50 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, 
total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 446, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 120, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 284, scannedObjects: 416 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 463, waits: 1729, scheduledNetCmd: 97, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 550, scheduledWork: 1888, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.762-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.767-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:09.768-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.768-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.769-0500 s20014| Succeeded 86 [js_test:multi_coll_drop] 2016-04-06T02:54:09.776-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.778-0500 s20014| 2016-04-06T02:53:45.803-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:09.781-0500 s20014| 2016-04-06T02:53:45.803-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 5070 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:09.782-0500 s20014| 2016-04-06T02:53:45.803-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
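[editor's note] The rounds above (requests 858 through 888) are the config-server distributed-lock protocol: a findAndModify that can only flip an unlocked document (state 0) to locked (state 2), an E11000 duplicate-key error when another process already holds it, and a lockpings check against the 900000 ms takeover threshold. A minimal mongo-shell sketch of one such round follows; field values are copied from the log, but the helper name and shape are hypothetical, not part of the test:

    // Sketch of one distributed-lock acquisition round against config.locks.
    // The query { state: 0 } plus upsert means a held lock (state 2) makes the
    // upsert collide on _id, producing exactly the E11000 seen above.
    function tryAcquireDistLock(configDB, ns, processId, why) {
        var lockSessionID = new ObjectId();
        var res = configDB.runCommand({
            findAndModify: "locks",
            query: { _id: ns, state: 0 },            // only grab an unlocked document
            update: { $set: { ts: lockSessionID, state: 2,
                              who: processId + ":conn1", process: processId,
                              when: new Date(), why: why } },
            upsert: true,
            new: true,
            writeConcern: { w: "majority", wtimeout: 15000 }
        });
        if (res.ok) {
            return lockSessionID;                    // lock acquired
        }
        // code 11000: someone else holds the lock. Inspect the holder's last
        // ping to decide whether the lock is stale enough to force.
        var lock = configDB.locks.findOne({ _id: ns });
        var ping = configDB.lockpings.findOne({ _id: lock.process });
        var elapsedMs = new Date() - ping.ping;
        if (elapsedMs < 900000) {                    // takeover time from the log
            return null;                             // "was not acquired"; retry later
        }
        // (forcing the stale lock is omitted in this sketch)
    }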
[js_test:multi_coll_drop] 2016-04-06T02:54:09.784-0500 s20014| 2016-04-06T02:53:46.303-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08a06c33406d4d9c0d0, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.791-0500 s20014| 2016-04-06T02:53:46.303-0500 D ASIO [conn1] startCommand: RemoteCommand 874 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.303-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d0'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226303), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.793-0500 s20014| 2016-04-06T02:53:46.303-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 874 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.796-0500 s20014| 2016-04-06T02:53:46.304-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 874 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.800-0500 s20014| 2016-04-06T02:53:46.304-0500 D ASIO [conn1] startCommand: RemoteCommand 876 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.304-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.801-0500 s20014| 2016-04-06T02:53:46.304-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 876 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.805-0500 s20014| 2016-04-06T02:53:46.304-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 876 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.807-0500 s20014| 2016-04-06T02:53:46.304-0500 D ASIO [conn1] startCommand: RemoteCommand 878 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.304-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.808-0500 s20014| 2016-04-06T02:53:46.305-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 878 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.813-0500 s20014| 2016-04-06T02:53:46.305-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 878 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 
2016-04-06T02:54:09.819-0500 s20014| 2016-04-06T02:53:46.305-0500 D ASIO [conn1] startCommand: RemoteCommand 880 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:16.305-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.821-0500 s20014| 2016-04-06T02:53:46.305-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 880 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.841-0500 s20014| 2016-04-06T02:53:46.307-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 880 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 109.0, uptimeMillis: 109165, uptimeEstimate: 95.0, localTime: new Date(1459929226305), asserts: { regular: 0, warning: 0, msg: 0, user: 52, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133859480, page_faults: 0 }, globalLock: { totalTime: 109162000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3906, w: 815, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1300, w: 260, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 709, w: 230 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 605, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 220860, bytesOut: 1654729, numRequests: 914 }, opcounters: { insert: 6, query: 283, update: 10, delete: 0, getmore: 117, command: 517 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133861000, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1318912, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1838568, total_free_bytes: 2941304, central_cache_free_bytes: 184208, transfer_cache_free_bytes: 918528, thread_cache_free_bytes: 1838568, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1807, num_central_objs: 177, num_transfer_objs: 1280, free_bytes: 104448, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, 
num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 522, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 532, num_central_objs: 2, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_pe .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 79 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 132 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 51 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 448, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 122, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 286, scannedObjects: 418 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 463, waits: 1737, scheduledNetCmd: 97, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 550, scheduledWork: 1896, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.841-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.842-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 
2016-04-06T02:54:09.842-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.847-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.847-0500 s20014| Succeeded 86 [js_test:multi_coll_drop] 2016-04-06T02:54:09.859-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.862-0500 s20014| 2016-04-06T02:53:46.307-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:09.863-0500 s20014| 2016-04-06T02:53:46.307-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 5574 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:09.866-0500 s20014| 2016-04-06T02:53:46.307-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:09.873-0500 s20014| 2016-04-06T02:53:46.390-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:09.874-0500 s20014| 2016-04-06T02:53:46.390-0500 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:09.885-0500 s20014| 2016-04-06T02:53:46.391-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:54:09.896-0500 s20014| 2016-04-06T02:53:46.392-0500 D NETWORK [ReplicaSetMonitorWatcher] connected to server mongovm16:20013 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:54:09.897-0500 s20014| 2016-04-06T02:53:46.394-0500 D NETWORK [ReplicaSetMonitorWatcher] connected connection! 
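[editor's note] The acquisition attempts are paced roughly 500 ms apart (02:53:45.286, 45.790, 46.303, 46.808), so the drop is simply polling the lock until the split releases it or the takeover threshold passes. A sketch of that outer loop, reusing the hypothetical tryAcquireDistLock from the sketch above; maxWaitMs is an assumed bound:

    // Poll for the distributed lock at a fixed ~500 ms cadence, matching the
    // spacing of the attempts in the log above.
    function acquireWithRetry(configDB, ns, processId, why, maxWaitMs) {
        var deadline = new Date().getTime() + maxWaitMs;
        while (new Date().getTime() < deadline) {
            var id = tryAcquireDistLock(configDB, ns, processId, why);
            if (id !== null && id !== undefined) {
                return id;
            }
            sleep(500);  // mongo shell built-in
        }
        throw new Error("timed out waiting for distributed lock on " + ns);
    }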
[js_test:multi_coll_drop] 2016-04-06T02:54:09.899-0500 s20014| 2016-04-06T02:53:46.808-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08a06c33406d4d9c0d1, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:09.902-0500 s20014| 2016-04-06T02:53:46.808-0500 D ASIO [conn1] startCommand: RemoteCommand 882 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.808-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d1'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226808), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.903-0500 s20014| 2016-04-06T02:53:46.808-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 882 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.905-0500 s20014| 2016-04-06T02:53:46.808-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 882 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.910-0500 s20014| 2016-04-06T02:53:46.809-0500 D ASIO [conn1] startCommand: RemoteCommand 884 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.809-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.913-0500 s20014| 2016-04-06T02:53:46.809-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 884 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.919-0500 s20014| 2016-04-06T02:53:46.809-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 884 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.925-0500 s20014| 2016-04-06T02:53:46.809-0500 D ASIO [conn1] startCommand: RemoteCommand 886 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.809-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.929-0500 s20014| 2016-04-06T02:53:46.809-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 886 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.931-0500 s20014| 2016-04-06T02:53:46.810-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 886 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 
2016-04-06T02:54:09.934-0500 s20014| 2016-04-06T02:53:46.810-0500 D ASIO [conn1] startCommand: RemoteCommand 888 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:16.810-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:09.939-0500 s20014| 2016-04-06T02:53:46.810-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 888 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:09.958-0500 s20014| 2016-04-06T02:53:46.811-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 888 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 109.0, uptimeMillis: 109670, uptimeEstimate: 95.0, localTime: new Date(1459929226810), asserts: { regular: 0, warning: 0, msg: 0, user: 53, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133859736, page_faults: 0 }, globalLock: { totalTime: 109667000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3911, w: 816, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1302, w: 261, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 711, w: 231 } }, Metadata: { acquireCount: { w: 81, W: 490 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 620 } }, oplog: { acquireCount: { r: 605, w: 37, R: 1, W: 1 } } }, network: { bytesIn: 221993, bytesOut: 1682282, numRequests: 919 }, opcounters: { insert: 6, query: 285, update: 10, delete: 0, getmore: 117, command: 520 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133861256, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1302528, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1847976, total_free_bytes: 2957432, central_cache_free_bytes: 190928, transfer_cache_free_bytes: 918528, thread_cache_free_bytes: 1847976, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 153, num_central_objs: 920, num_transfer_objs: 0, free_bytes: 8584, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1806, num_central_objs: 177, num_transfer_objs: 1280, free_bytes: 104416, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, 
num_spans: 24, num_thread_objs: 710, num_central_objs: 0, num_transfer_objs: 340, free_bytes: 50400, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 522, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400448, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 34, num_thread_objs: 534, num_central_objs: 0, num_transfer_objs: 1836, free_bytes: 189600, allocated_bytes: 278528 }, { bytes_per_object: 96, pages_pe .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 80 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 132 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 52 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 450, updated: 22 }, getLastError: { wtime: { num: 34, totalMillis: 5770 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 124, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 288, scannedObjects: 420 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 463, waits: 1743, scheduledNetCmd: 98, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 551, scheduledWork: 1902, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:09.958-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:09.960-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 
2016-04-06T02:54:09.967-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.977-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:09.980-0500 s20014| Succeeded 87 [js_test:multi_coll_drop] 2016-04-06T02:54:09.986-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.015-0500 s20014| 2016-04-06T02:53:46.811-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:10.024-0500 s20014| 2016-04-06T02:53:46.811-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 6080 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.025-0500 s20014| 2016-04-06T02:53:46.811-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:10.026-0500 s20014| 2016-04-06T02:53:46.934-0500 D ASIO [Balancer] startCommand: RemoteCommand 890 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.934-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929226934), up: 99, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.030-0500 s20014| 2016-04-06T02:53:46.934-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 890 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.031-0500 s20014| 2016-04-06T02:53:46.953-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 890 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929226000|1, t: 7 }, electionId: ObjectId('7fffffff0000000000000007') } [js_test:multi_coll_drop] 2016-04-06T02:54:10.035-0500 s20014| 2016-04-06T02:53:46.953-0500 D ASIO [Balancer] startCommand: RemoteCommand 892 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.953-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.036-0500 s20014| 2016-04-06T02:53:46.954-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 892 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.039-0500 s20014| 2016-04-06T02:53:46.955-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 892 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.040-0500 s20015| 2016-04-06T02:53:50.092-0500 I ASIO [Balancer] dropping unhealthy pooled connection to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.040-0500 s20015| 2016-04-06T02:53:50.092-0500 I ASIO [Balancer] after drop, pool was empty, going to spawn some connections 
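[editor's note] Requests 890 and 892 above show the balancer's causal write-then-read pattern: upsert this mongos's ping document into config.mongos, then issue the next read with readConcern afterOpTime set to the opTime returned by that write, so the read cannot observe an older snapshot. A sketch of the same pair, with host and field values copied from the log:

    // Upsert this mongos's ping document, then read config.shards at an
    // opTime no earlier than that write, mirroring requests 890 and 892.
    var w = db.getSiblingDB("config").runCommand({
        update: "mongos",
        updates: [{
            q: { _id: "mongovm16:20014" },
            u: { $set: { _id: "mongovm16:20014", ping: new Date(),
                         up: 99, waiting: false,
                         mongoVersion: "3.3.4-37-g36f3ff8" } },
            multi: false,
            upsert: true
        }],
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // Feed the write's opTime (visible in request 890's reply above) back as
    // afterOpTime so the read waits for a snapshot that includes the ping.
    var r = db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority", afterOpTime: w.opTime },
        maxTimeMS: 30000
    });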
[js_test:multi_coll_drop] 2016-04-06T02:54:10.042-0500 s20015| 2016-04-06T02:53:50.092-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.044-0500 s20015| 2016-04-06T02:53:50.092-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 158 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.044-0500 s20015| 2016-04-06T02:53:50.093-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.046-0500 s20015| 2016-04-06T02:53:50.093-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 158 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:10.047-0500 s20015| 2016-04-06T02:53:50.093-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 157 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.051-0500 s20015| 2016-04-06T02:53:50.093-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 157 finished with response: { ok: 0.0, errmsg: "not master", code: 10107 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.051-0500 s20015| 2016-04-06T02:53:50.093-0500 D NETWORK [Balancer] Marking host mongovm16:20012 as failed [js_test:multi_coll_drop] 2016-04-06T02:54:10.053-0500 s20015| 2016-04-06T02:53:50.093-0500 D SHARDING [Balancer] Command failed with retriable error and will be retried :: caused by :: NotMaster: not master [js_test:multi_coll_drop] 2016-04-06T02:54:10.053-0500 s20015| 2016-04-06T02:53:50.093-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.054-0500 s20015| 2016-04-06T02:53:50.093-0500 D NETWORK [Balancer] SocketException: remote: (NONE):0 error: 9001 socket exception [CLOSED] server [192.168.100.28:20012] [js_test:multi_coll_drop] 2016-04-06T02:54:10.056-0500 s20015| 2016-04-06T02:53:50.093-0500 D - [Balancer] User Assertion: 6:network error while attempting to run command 'ismaster' on host 'mongovm16:20012' [js_test:multi_coll_drop] 2016-04-06T02:54:10.060-0500 s20015| 2016-04-06T02:53:50.093-0500 I NETWORK [Balancer] Detected bad connection created at 1459929176715008 microSec, clearing pool for mongovm16:20012 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:54:10.075-0500 s20015| 2016-04-06T02:53:50.093-0500 D NETWORK [Balancer] Marking host mongovm16:20012 as failed [js_test:multi_coll_drop] 2016-04-06T02:54:10.082-0500 s20015| 2016-04-06T02:53:50.094-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.091-0500 s20015| 2016-04-06T02:53:50.594-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.099-0500 s20015| 2016-04-06T02:53:50.594-0500 D NETWORK [Balancer] creating new connection to:mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.099-0500 s20015| 2016-04-06T02:53:50.594-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:54:10.113-0500 s20015| 2016-04-06T02:53:50.595-0500 D NETWORK [Balancer] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:54:10.113-0500 s20015| 2016-04-06T02:53:50.595-0500 D NETWORK [Balancer] connected connection! 
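[editor's note] s20015 is riding out a failover here: the cached primary answers NotMaster (code 10107), the host is marked failed, and the replica-set monitor re-scans the members about every 500 ms until a new primary appears (the term moves from 7 to 8 further down). A sketch of that retriable-error loop; the member list is from the log, the loop bound is an assumption:

    // Retry a config-server command across a failover, re-scanning the
    // members for a primary whenever we get NotMaster (code 10107).
    var members = ["mongovm16:20011", "mongovm16:20012", "mongovm16:20013"];
    function runOnPrimaryWithRetry(dbName, cmd) {
        for (var attempt = 0; attempt < 60; attempt++) {   // assumed bound
            for (var i = 0; i < members.length; i++) {
                try {
                    var conn = new Mongo(members[i]);
                    var hello = conn.getDB("admin").runCommand({ isMaster: 1 });
                    if (!hello.ismaster) {
                        continue;                          // secondary; keep scanning
                    }
                    var res = conn.getDB(dbName).runCommand(cmd);
                    if (res.ok || res.code !== 10107) {
                        return res;                        // success or non-retriable
                    }
                } catch (e) {
                    // connection refused / closed: treat like "host failed"
                }
            }
            sleep(500);  // matches the refresh cadence in the log above
        }
        throw new Error("no primary found for multidrop-configRS");
    }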
[js_test:multi_coll_drop] 2016-04-06T02:54:10.116-0500 s20015| 2016-04-06T02:53:50.596-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.117-0500 s20015| 2016-04-06T02:53:51.096-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.129-0500 s20015| 2016-04-06T02:53:51.105-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.136-0500 s20015| 2016-04-06T02:53:51.605-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.137-0500 s20015| 2016-04-06T02:53:51.606-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.155-0500 s20015| 2016-04-06T02:53:52.106-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.163-0500 s20015| 2016-04-06T02:53:52.108-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.173-0500 s20015| 2016-04-06T02:53:52.608-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.176-0500 s20015| 2016-04-06T02:53:52.609-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.182-0500 s20015| 2016-04-06T02:53:53.110-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.183-0500 s20015| 2016-04-06T02:53:53.112-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.185-0500 s20015| 2016-04-06T02:53:53.612-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.188-0500 s20015| 2016-04-06T02:53:53.613-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.193-0500 s20015| 2016-04-06T02:53:54.113-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.205-0500 s20015| 2016-04-06T02:53:54.114-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20011, no events [js_test:multi_coll_drop] 2016-04-06T02:54:10.206-0500 s20015| 2016-04-06T02:53:54.117-0500 D NETWORK [Balancer] polling for status of connection to 192.168.100.28:20013, no events [js_test:multi_coll_drop] 2016-04-06T02:54:10.210-0500 s20015| 2016-04-06T02:53:54.118-0500 W NETWORK [Balancer] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.235-0500 s20015| 2016-04-06T02:53:54.618-0500 D NETWORK [Balancer] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:10.242-0500 s20015| 2016-04-06T02:53:54.619-0500 D ASIO [Balancer] startCommand: RemoteCommand 160 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:24.619-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929230092), up: 103, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 
2016-04-06T02:54:10.245-0500 s20015| 2016-04-06T02:53:54.620-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 160 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:10.246-0500 s20015| 2016-04-06T02:53:56.106-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 160 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929234000|3, t: 8 }, electionId: ObjectId('7fffffff0000000000000008') } [js_test:multi_coll_drop] 2016-04-06T02:54:10.258-0500 s20015| 2016-04-06T02:53:56.106-0500 D ASIO [Balancer] startCommand: RemoteCommand 162 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:26.106-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929234000|3, t: 8 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.259-0500 s20015| 2016-04-06T02:53:56.106-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 162 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:10.264-0500 s20015| 2016-04-06T02:53:56.106-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 162 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.268-0500 s20015| 2016-04-06T02:53:56.107-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929234000|3, t: 8 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.271-0500 s20015| 2016-04-06T02:53:56.107-0500 D ASIO [Balancer] startCommand: RemoteCommand 164 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:26.107-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929234000|3, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.273-0500 s20015| 2016-04-06T02:53:56.107-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 164 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:10.277-0500 s20015| 2016-04-06T02:53:56.108-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 164 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.277-0500 s20015| 2016-04-06T02:53:56.108-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:54:10.293-0500 s20015| 2016-04-06T02:53:56.108-0500 D ASIO [Balancer] startCommand: RemoteCommand 166 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:26.108-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929234000|3, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.295-0500 s20015| 2016-04-06T02:53:56.108-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 166 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:10.305-0500 s20015| 2016-04-06T02:53:56.117-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 166 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.306-0500 s20015| 2016-04-06T02:53:56.117-0500 D SHARDING [Balancer] skipping 
balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:54:10.323-0500 s20015| 2016-04-06T02:53:56.117-0500 D ASIO [Balancer] startCommand: RemoteCommand 168 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:26.117-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929236117), up: 109, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.325-0500 s20015| 2016-04-06T02:53:56.117-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 168 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:10.336-0500 s20015| 2016-04-06T02:53:56.138-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 168 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929236000|1, t: 8 }, electionId: ObjectId('7fffffff0000000000000008') } [js_test:multi_coll_drop] 2016-04-06T02:54:10.346-0500 c20011| 2016-04-06T02:53:18.985-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:10.347-0500 c20011| 2016-04-06T02:53:18.985-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:10.351-0500 c20011| 2016-04-06T02:53:18.985-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|3, t: 5 } and is durable through: { ts: Timestamp 1459929194000|2, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.352-0500 c20011| 2016-04-06T02:53:18.985-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.355-0500 c20012| 2016-04-06T02:53:40.727-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:10.360-0500 c20012| 2016-04-06T02:53:40.727-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929216000|3, t: 7 } and is durable through: { ts: Timestamp 1459929216000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.364-0500 c20012| 2016-04-06T02:53:40.727-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.366-0500 c20012| 2016-04-06T02:53:40.727-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929216000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, 
appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.373-0500 c20012| 2016-04-06T02:53:40.727-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|2, t: 7 } } cursorid:22842679084 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 22ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.380-0500 c20012| 2016-04-06T02:53:40.727-0500 I COMMAND [conn42] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:53:40.701-0500-5704c08406c33406d4d9c0c4", server: "mongovm16", clientAddr: "127.0.0.1:55066", time: new Date(1459929220701), what: "dropCollection.start", ns: "multidrop.coll", details: {} } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 26ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.382-0500 c20012| 2016-04-06T02:53:40.727-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.418-0500 c20012| 2016-04-06T02:53:40.728-0500 D COMMAND [conn42] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.422-0500 c20012| 2016-04-06T02:53:40.728-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.424-0500 c20012| 2016-04-06T02:53:40.728-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.426-0500 c20012| 2016-04-06T02:53:40.728-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. 
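
The Balancer round at the top of this stretch reads two documents out of config.settings: command 164 fetches the chunksize document (logged as "Refreshing MaxChunkSize: 50MB") and command 166 fetches the balancer document, whose stopped: true is why the round is skipped. A minimal shell sketch of the same lookups, assuming a mongo shell connected to this cluster; the sh.* helpers are the stock wrappers over the same document:

    var conf = db.getSiblingDB("config");

    // Command 164 above: chunk-size refresh; value is in MB.
    printjson(conf.settings.findOne({ _id: "chunksize" }));  // { _id: "chunksize", value: 50 }

    // Command 166 above: balancer state; stopped: true triggers
    // "skipping balancing round because balancing is disabled".
    printjson(conf.settings.findOne({ _id: "balancer" }));   // { _id: "balancer", stopped: true }

    sh.getBalancerState();      // false while the stopped flag is set
    sh.setBalancerState(true);  // clears it and re-enables balancing
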
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:10.428-0500 c20012| 2016-04-06T02:53:40.728-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.432-0500 c20012| 2016-04-06T02:53:40.728-0500 I COMMAND [conn42] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:443 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.444-0500 c20012| 2016-04-06T02:53:40.728-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.449-0500 c20012| 2016-04-06T02:53:40.728-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08406c33406d4d9c0c5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929220728), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.450-0500 c20012| 2016-04-06T02:53:40.728-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.452-0500 c20012| 2016-04-06T02:53:40.728-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.456-0500 c20012| 2016-04-06T02:53:40.728-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.474-0500 c20012| 2016-04-06T02:53:40.728-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.483-0500 c20012| 2016-04-06T02:53:40.729-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.484-0500 c20012| 2016-04-06T02:53:40.729-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.491-0500 c20012| 2016-04-06T02:53:40.729-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08406c33406d4d9c0c5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929220728), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.496-0500 c20012| 2016-04-06T02:53:40.729-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08406c33406d4d9c0c5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929220728), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08406c33406d4d9c0c5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929220728), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.502-0500 c20012| 2016-04-06T02:53:40.729-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.504-0500 c20012| 2016-04-06T02:53:40.729-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.515-0500 c20012| 2016-04-06T02:53:40.729-0500 D COMMAND [conn42] Using 'committed' snapshot. 
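
Each findAndModify on config.locks in this stretch is one attempt by the distributed-lock manager (acting for mongovm16:20014) to take the "multidrop.coll" lock for the drop. The query { _id: "multidrop.coll", state: 0 } matches only an unlocked lock document; the document already exists with state: 2 (held), so nothing matches, the upsert then inserts a fresh document with the same _id, and the unique _id index rejects it with E11000. The lock manager treats the duplicate-key error as "lock busy", re-reads config.locks and config.lockpings, and retries roughly every 500ms, which is the block that keeps repeating below. A schematic shell version of a single attempt, with the field values copied from the log:

    // Schematic single lock-acquisition attempt, mirroring the log above.
    var locks = db.getSiblingDB("config").locks;
    var attempt = locks.findAndModify({
        query: { _id: "multidrop.coll", state: 0 },  // matches only an unlocked doc
        update: { $set: {
            ts: ObjectId(),                          // per-attempt lock id
            state: 2,                                // 2 = lock taken
            who: "mongovm16:20014:1459929123:-665935931:conn1",
            process: "mongovm16:20014:1459929123:-665935931",
            when: new Date(),
            why: "drop"
        } },
        upsert: true,  // nothing matched, so this inserts a duplicate _id -> E11000
        new: true
    });
    // On code 11000 the caller backs off and retries; on success `attempt`
    // is the lock document it now holds.
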
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.522-0500 c20012| 2016-04-06T02:53:40.729-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:10.527-0500 c20012| 2016-04-06T02:53:40.729-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.533-0500 c20012| 2016-04-06T02:53:40.729-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.536-0500 c20012| 2016-04-06T02:53:40.729-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.538-0500 c20012| 2016-04-06T02:53:40.729-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.543-0500 c20012| 2016-04-06T02:53:40.729-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:10.547-0500 c20012| 2016-04-06T02:53:40.730-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.564-0500 c20012| 2016-04-06T02:53:40.730-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.577-0500 c20012| 2016-04-06T02:53:40.731-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.585-0500 c20012| 2016-04-06T02:53:41.232-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c6'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221232), why: "drop" } }, upsert: true, new: 
true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.586-0500 c20012| 2016-04-06T02:53:41.232-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.591-0500 c20012| 2016-04-06T02:53:41.232-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.594-0500 c20012| 2016-04-06T02:53:41.232-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.602-0500 c20012| 2016-04-06T02:53:41.232-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.602-0500 c20012| 2016-04-06T02:53:41.233-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.602-0500 c20012| 2016-04-06T02:53:41.233-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.606-0500 c20012| 2016-04-06T02:53:41.233-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c6'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221232), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.613-0500 c20012| 2016-04-06T02:53:41.233-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c6'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221232), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c6'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221232), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.646-0500 c20012| 2016-04-06T02:53:41.233-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.660-0500 
c20012| 2016-04-06T02:53:41.233-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.675-0500 c20012| 2016-04-06T02:53:41.233-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.675-0500 c20012| 2016-04-06T02:53:41.233-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:10.682-0500 c20012| 2016-04-06T02:53:41.233-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.685-0500 c20012| 2016-04-06T02:53:41.233-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.687-0500 c20012| 2016-04-06T02:53:41.233-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.688-0500 c20012| 2016-04-06T02:53:41.233-0500 D COMMAND [conn42] Using 'committed' snapshot. 
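
The "Waiting for 'committed' snapshot" / "Using 'committed' snapshot" pairs show how these readConcern { level: "majority", afterOpTime: ... } reads are served: the command blocks until the node's committed snapshot covers the requested optime, then runs against that snapshot, so the caller can never observe an uncommitted (rollback-able) lock state. The afterOpTime field is pinned internally by the sharding code to the optime of its last config write; the user-visible form of the same read is a plain majority find, as in this sketch:

    // Majority read of the lock document, the user-visible equivalent of the
    // internal reads logged above (which additionally pass afterOpTime).
    var res = db.getSiblingDB("config").runCommand({
        find: "locks",
        filter: { _id: "multidrop.coll" },
        readConcern: { level: "majority" },
        limit: 1,
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);  // majority-committed view of the lock
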
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.691-0500 c20011| 2016-04-06T02:53:18.985-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.698-0500 c20011| 2016-04-06T02:53:18.985-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|3, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929194000|2, t: 5 }, name-id: "269" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.701-0500 c20011| 2016-04-06T02:53:18.985-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.703-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:10.706-0500 c20012| 2016-04-06T02:53:41.233-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:10.706-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:10.707-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:10.712-0500 c20012| 2016-04-06T02:53:41.233-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.715-0500 c20012| 2016-04-06T02:53:41.233-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.716-0500 c20012| 2016-04-06T02:53:41.234-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.716-0500 c20012| 2016-04-06T02:53:41.483-0500 D COMMAND [conn35] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.717-0500 c20012| 2016-04-06T02:53:41.484-0500 I COMMAND [conn35] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.719-0500 c20012| 2016-04-06T02:53:41.735-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c7'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", 
process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221735), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.721-0500 c20012| 2016-04-06T02:53:41.735-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.725-0500 c20012| 2016-04-06T02:53:41.735-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.732-0500 c20012| 2016-04-06T02:53:41.735-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.734-0500 c20012| 2016-04-06T02:53:41.735-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.735-0500 c20012| 2016-04-06T02:53:41.735-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.736-0500 c20012| 2016-04-06T02:53:41.735-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.740-0500 c20012| 2016-04-06T02:53:41.735-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c7'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221735), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.746-0500 c20012| 2016-04-06T02:53:41.735-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c7'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221735), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08506c33406d4d9c0c7'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929221735), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.756-0500 c20012| 2016-04-06T02:53:41.736-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.758-0500 c20012| 2016-04-06T02:53:41.736-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.761-0500 d20010| 2016-04-06T02:53:56.628-0500 I NETWORK [PeriodicTaskRunner] Socket closed remotely, no longer connected (idle 7 secs, remote host 192.168.100.28:20012) [js_test:multi_coll_drop] 2016-04-06T02:54:10.767-0500 c20011| 2016-04-06T02:53:18.985-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929194000|2, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.771-0500 c20011| 2016-04-06T02:53:18.986-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:10.771-0500 c20011| 2016-04-06T02:53:18.986-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:10.776-0500 c20011| 2016-04-06T02:53:18.986-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|3, t: 5 } and is durable through: { ts: Timestamp 1459929198000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.780-0500 c20011| 2016-04-06T02:53:18.986-0500 D REPL [conn59] Updating _lastCommittedOpTime to { ts: Timestamp 1459929198000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.783-0500 c20011| 2016-04-06T02:53:18.986-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|1, t: 5 }, name-id: "270" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.784-0500 c20011| 2016-04-06T02:53:18.986-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|3, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|1, t: 5 }, name-id: "270" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.790-0500 c20011| 2016-04-06T02:53:18.986-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|1, t: 5 }, name-id: "270" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.792-0500 c20011| 2016-04-06T02:53:18.986-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|3, t: 5 } is not yet part of the current 'committed' 
snapshot: { optime: { ts: Timestamp 1459929198000|1, t: 5 }, name-id: "270" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.795-0500 c20011| 2016-04-06T02:53:18.986-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.800-0500 c20011| 2016-04-06T02:53:18.986-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.803-0500 c20012| 2016-04-06T02:53:41.736-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.804-0500 c20012| 2016-04-06T02:53:41.736-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:10.808-0500 c20012| 2016-04-06T02:53:41.736-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.812-0500 c20012| 2016-04-06T02:53:41.736-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.813-0500 c20012| 2016-04-06T02:53:41.736-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.835-0500 c20012| 2016-04-06T02:53:41.736-0500 D COMMAND [conn42] Using 'committed' snapshot. 
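
The replSetUpdatePosition traffic interleaved here is how the primary tracks each member's applied and durable optimes; "Updating _lastCommittedOpTime to ..." above is the moment the commit point advances because a majority has become durable through that optime, and "Required snapshot optime ... is not yet part of the current 'committed' snapshot" is a reader still waiting for that to happen. To first approximation the commit point is the highest optime that a majority of members have made durable; a simplified sketch of that rule, treating optimes as plain { t, ts } number pairs for illustration and ignoring the term checks the real implementation adds:

    // Simplified commit-point rule: sort the durable optimes reported via
    // replSetUpdatePosition and take the highest one a majority has reached.
    function commitPoint(durables) {  // e.g. [{ t: 7, ts: 1459929220003 }, ...]
        var sorted = durables.slice().sort(function (a, b) {
            return a.t !== b.t ? a.t - b.t : a.ts - b.ts;  // compare (term, ts)
        });
        // For this 3-member set a majority is 2, so the median is the answer.
        return sorted[sorted.length - Math.floor(sorted.length / 2) - 1];
    }
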
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.836-0500 c20012| 2016-04-06T02:53:41.736-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:10.840-0500 c20012| 2016-04-06T02:53:41.736-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.842-0500 c20012| 2016-04-06T02:53:41.736-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.844-0500 c20012| 2016-04-06T02:53:41.743-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.846-0500 c20012| 2016-04-06T02:53:41.752-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1426 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:51.752-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.847-0500 c20012| 2016-04-06T02:53:41.752-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1426 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:10.851-0500 c20012| 2016-04-06T02:53:41.752-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1426 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, opTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.852-0500 c20012| 2016-04-06T02:53:41.752-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:43.752Z [js_test:multi_coll_drop] 2016-04-06T02:54:10.853-0500 c20012| 2016-04-06T02:53:42.143-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.854-0500 c20012| 2016-04-06T02:53:42.143-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:10.856-0500 c20012| 2016-04-06T02:53:42.143-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.873-0500 c20012| 2016-04-06T02:53:42.244-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c8'), state: 2, who: 
"mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222243), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.875-0500 c20012| 2016-04-06T02:53:42.244-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.882-0500 c20012| 2016-04-06T02:53:42.244-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.888-0500 c20012| 2016-04-06T02:53:42.244-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.889-0500 c20012| 2016-04-06T02:53:42.244-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.892-0500 c20012| 2016-04-06T02:53:42.244-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.894-0500 c20012| 2016-04-06T02:53:42.244-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.896-0500 c20012| 2016-04-06T02:53:42.245-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c8'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222243), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.900-0500 c20012| 2016-04-06T02:53:42.245-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c8'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222243), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c8'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222243), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.901-0500 c20012| 2016-04-06T02:53:42.245-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.901-0500 c20012| 2016-04-06T02:53:42.245-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.903-0500 c20012| 2016-04-06T02:53:42.245-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.904-0500 c20012| 2016-04-06T02:53:42.245-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:10.906-0500 c20012| 2016-04-06T02:53:42.245-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.908-0500 c20012| 2016-04-06T02:53:42.256-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.909-0500 c20012| 2016-04-06T02:53:42.257-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.910-0500 c20012| 2016-04-06T02:53:42.257-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.910-0500 c20012| 2016-04-06T02:53:42.257-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:10.915-0500 c20012| 2016-04-06T02:53:42.257-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.916-0500 c20012| 2016-04-06T02:53:42.258-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.917-0500 c20012| 2016-04-06T02:53:42.260-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.919-0500 c20012| 2016-04-06T02:53:42.398-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.919-0500 c20012| 2016-04-06T02:53:42.398-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:10.921-0500 c20012| 2016-04-06T02:53:42.398-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.924-0500 c20012| 2016-04-06T02:53:42.400-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1428 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:52.400-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.926-0500 c20012| 2016-04-06T02:53:42.400-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1428 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:10.929-0500 c20012| 2016-04-06T02:53:42.401-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1428 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, opTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:10.932-0500 c20012| 2016-04-06T02:53:42.401-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:44.401Z [js_test:multi_coll_drop] 2016-04-06T02:54:10.935-0500 c20012| 2016-04-06T02:53:42.761-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c9'), state: 2, who: 
"mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222760), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.936-0500 c20012| 2016-04-06T02:53:42.761-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.944-0500 c20012| 2016-04-06T02:53:42.761-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.946-0500 c20012| 2016-04-06T02:53:42.761-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.950-0500 c20012| 2016-04-06T02:53:42.761-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.955-0500 c20012| 2016-04-06T02:53:42.761-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.955-0500 c20012| 2016-04-06T02:53:42.761-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:10.966-0500 c20012| 2016-04-06T02:53:42.761-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c9'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222760), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:10.973-0500 c20012| 2016-04-06T02:53:42.761-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c9'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222760), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08606c33406d4d9c0c9'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929222760), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:10.979-0500 c20012| 2016-04-06T02:53:42.762-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:10.987-0500 c20012| 2016-04-06T02:53:42.762-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.044-0500 c20012| 2016-04-06T02:53:42.762-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.045-0500 c20012| 2016-04-06T02:53:42.762-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.068-0500 c20012| 2016-04-06T02:53:42.762-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.072-0500 c20012| 2016-04-06T02:53:42.762-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.073-0500 c20012| 2016-04-06T02:53:42.762-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.074-0500 c20012| 2016-04-06T02:53:42.762-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.075-0500 c20012| 2016-04-06T02:53:42.762-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.076-0500 c20012| 2016-04-06T02:53:42.763-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.077-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:11.077-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:11.078-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:11.078-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:11.078-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:11.078-0500 c20012| 2016-04-06T02:53:42.763-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.079-0500 c20012| 2016-04-06T02:53:42.764-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.079-0500 c20012| 2016-04-06T02:53:43.228-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.080-0500 c20012| 2016-04-06T02:53:43.228-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.080-0500 c20012| 2016-04-06T02:53:43.228-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.082-0500 c20012| 2016-04-06T02:53:43.228-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { 
ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.084-0500 c20012| 2016-04-06T02:53:43.228-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.086-0500 c20012| 2016-04-06T02:53:43.229-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.086-0500 c20012| 2016-04-06T02:53:43.229-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.090-0500 c20012| 2016-04-06T02:53:43.229-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.091-0500 c20012| 2016-04-06T02:53:43.229-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.094-0500 c20012| 2016-04-06T02:53:43.229-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.095-0500 c20012| 2016-04-06T02:53:43.230-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } cursorid:22842679084 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2502ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.107-0500 c20012| 2016-04-06T02:53:43.230-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { 
ts: Timestamp 1459929220000|3, t: 7 } } cursorid:23538204668 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2502ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.115-0500 c20012| 2016-04-06T02:53:43.231-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.117-0500 c20012| 2016-04-06T02:53:43.233-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.120-0500 c20012| 2016-04-06T02:53:43.266-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0ca'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223265), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.122-0500 c20012| 2016-04-06T02:53:43.266-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.123-0500 c20012| 2016-04-06T02:53:43.266-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.124-0500 c20012| 2016-04-06T02:53:43.266-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.126-0500 c20012| 2016-04-06T02:53:43.266-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.127-0500 c20012| 2016-04-06T02:53:43.266-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.128-0500 c20012| 2016-04-06T02:53:43.266-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.130-0500 c20012| 2016-04-06T02:53:43.266-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0ca'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223265), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.134-0500 c20012| 2016-04-06T02:53:43.266-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0ca'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223265), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0ca'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223265), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.137-0500 c20012| 2016-04-06T02:53:43.266-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.138-0500 c20012| 2016-04-06T02:53:43.266-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.139-0500 c20012| 2016-04-06T02:53:43.266-0500 D COMMAND [conn42] Using 'committed' snapshot. 
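
The findAndModify failure above is the config server's distributed-lock acquire path as seen from the primary: the caller identified as "mongovm16:20014:1459929123:-665935931:conn1" tries to atomically claim the "multidrop.coll" lock by matching { _id, state: 0 } (state 0 = unlocked) and upserting state 2 (locked). Because another process still holds the lock, the query matches nothing, the upsert then tries to insert a second document with the same _id, and the unique _id index rejects it with E11000. A minimal mongo-shell sketch of that attempt, reusing the identifiers from the log (run against a config server; this is a reconstruction, not part of the test):

    var config = db.getSiblingDB("config");
    config.runCommand({
        findAndModify: "locks",
        // Matches only if the lock is currently free (state 0).
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: {
            ts: ObjectId(),                                  // fresh lock session id
            state: 2,                                        // 2 = exclusively held
            who: "mongovm16:20014:1459929123:-665935931:conn1",
            process: "mongovm16:20014:1459929123:-665935931",
            when: new Date(),
            why: "drop"
        } },
        upsert: true,   // insert the lock doc only if it does not exist at all
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // Held lock => query matches nothing => upsert attempts an insert with the
    // same _id => duplicate key on the _id_ index => E11000, which is the
    // failure repeating throughout this section roughly every 500ms.
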
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.139-0500 c20012| 2016-04-06T02:53:43.266-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.141-0500 c20012| 2016-04-06T02:53:43.267-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.142-0500 c20012| 2016-04-06T02:53:43.267-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.143-0500 c20012| 2016-04-06T02:53:43.267-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.145-0500 c20012| 2016-04-06T02:53:43.267-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.146-0500 c20012| 2016-04-06T02:53:43.267-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.148-0500 c20012| 2016-04-06T02:53:43.267-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.152-0500 c20012| 2016-04-06T02:53:43.267-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.155-0500 c20012| 2016-04-06T02:53:43.268-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.156-0500 c20012| 2016-04-06T02:53:43.752-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1430 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:53.752-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.156-0500 c20012| 2016-04-06T02:53:43.752-0500 D ASIO 
[NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1430 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:11.156-0500 c20012| 2016-04-06T02:53:43.754-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1430 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, opTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.157-0500 c20012| 2016-04-06T02:53:43.755-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:45.755Z [js_test:multi_coll_drop] 2016-04-06T02:54:11.157-0500 c20012| 2016-04-06T02:53:43.769-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0cb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223769), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.157-0500 c20012| 2016-04-06T02:53:43.769-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.167-0500 c20012| 2016-04-06T02:53:43.769-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.170-0500 c20012| 2016-04-06T02:53:43.769-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.173-0500 c20012| 2016-04-06T02:53:43.769-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.174-0500 c20012| 2016-04-06T02:53:43.770-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.175-0500 c20012| 2016-04-06T02:53:43.770-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.181-0500 c20012| 2016-04-06T02:53:43.770-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0cb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223769), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.209-0500 c20012| 2016-04-06T02:53:43.770-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0cb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223769), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08706c33406d4d9c0cb'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929223769), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.213-0500 c20012| 2016-04-06T02:53:43.770-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.215-0500 c20012| 2016-04-06T02:53:43.770-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.217-0500 c20012| 2016-04-06T02:53:43.770-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.220-0500 c20012| 2016-04-06T02:53:43.770-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.224-0500 c20012| 2016-04-06T02:53:43.770-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.227-0500 c20012| 2016-04-06T02:53:43.770-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.232-0500 c20012| 2016-04-06T02:53:43.770-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.242-0500 c20012| 2016-04-06T02:53:43.770-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.244-0500 c20012| 2016-04-06T02:53:43.770-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.255-0500 c20012| 2016-04-06T02:53:43.771-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.262-0500 c20012| 2016-04-06T02:53:43.771-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.263-0500 c20012| 2016-04-06T02:53:43.772-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.265-0500 c20012| 2016-04-06T02:53:44.143-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.265-0500 c20012| 2016-04-06T02:53:44.143-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:11.275-0500 c20012| 
2016-04-06T02:53:44.143-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.278-0500 c20012| 2016-04-06T02:53:44.273-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cc'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224273), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.279-0500 c20012| 2016-04-06T02:53:44.273-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.280-0500 c20012| 2016-04-06T02:53:44.273-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.281-0500 c20012| 2016-04-06T02:53:44.273-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.287-0500 c20012| 2016-04-06T02:53:44.273-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.289-0500 c20012| 2016-04-06T02:53:44.274-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.290-0500 c20012| 2016-04-06T02:53:44.274-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.293-0500 c20012| 2016-04-06T02:53:44.274-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cc'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224273), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.299-0500 c20012| 2016-04-06T02:53:44.274-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cc'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224273), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cc'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", 
when: new Date(1459929224273), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.307-0500 c20012| 2016-04-06T02:53:44.274-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.311-0500 c20012| 2016-04-06T02:53:44.274-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.314-0500 c20012| 2016-04-06T02:53:44.274-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.315-0500 c20012| 2016-04-06T02:53:44.274-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.318-0500 c20012| 2016-04-06T02:53:44.274-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.329-0500 c20012| 2016-04-06T02:53:44.275-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.330-0500 c20012| 2016-04-06T02:53:44.275-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.332-0500 c20012| 2016-04-06T02:53:44.275-0500 D COMMAND [conn42] Using 'committed' snapshot. 
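
Each follow-up read of config.locks and config.lockpings carries readConcern { level: "majority", afterOpTime: ... }, which is why every one is preceded by a "Waiting for 'committed' snapshot" / "Using 'committed' snapshot" pair: the node must have a majority-committed snapshot at or beyond the requested optime before it may answer. A sketch of issuing the same read from the shell, assuming the server accepts afterOpTime from an external client as it does from internal callers, and reading the log's "Timestamp 1459929220000|3" notation as seconds 1459929220, increment 3:

    var config = db.getSiblingDB("config");
    config.runCommand({
        find: "locks",
        filter: { _id: "multidrop.coll" },
        limit: 1,
        maxTimeMS: 30000,
        // Do not answer from a committed snapshot older than this optime.
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1459929220, 3), t: NumberLong(7) }
        }
    });
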
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.334-0500 c20012| 2016-04-06T02:53:44.275-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.337-0500 c20012| 2016-04-06T02:53:44.275-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.338-0500 c20012| 2016-04-06T02:53:44.277-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.339-0500 c20012| 2016-04-06T02:53:44.282-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.340-0500 c20012| 2016-04-06T02:53:44.398-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.340-0500 c20012| 2016-04-06T02:53:44.398-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:11.342-0500 c20012| 2016-04-06T02:53:44.399-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.344-0500 c20012| 2016-04-06T02:53:44.401-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1432 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:54.401-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.345-0500 c20012| 2016-04-06T02:53:44.401-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1432 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:11.347-0500 c20012| 2016-04-06T02:53:44.401-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1432 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, opTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.348-0500 c20012| 2016-04-06T02:53:44.401-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:46.401Z [js_test:multi_coll_drop] 2016-04-06T02:54:11.355-0500 c20012| 2016-04-06T02:53:44.783-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cd'), state: 2, who: 
"mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224783), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.356-0500 c20012| 2016-04-06T02:53:44.783-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.358-0500 c20012| 2016-04-06T02:53:44.783-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.362-0500 c20012| 2016-04-06T02:53:44.783-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.363-0500 c20012| 2016-04-06T02:53:44.783-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.365-0500 c20012| 2016-04-06T02:53:44.783-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.366-0500 c20012| 2016-04-06T02:53:44.783-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.373-0500 c20012| 2016-04-06T02:53:44.784-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cd'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224783), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.377-0500 c20012| 2016-04-06T02:53:44.784-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cd'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224783), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08806c33406d4d9c0cd'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929224783), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.385-0500 c20012| 2016-04-06T02:53:44.784-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.388-0500 c20012| 2016-04-06T02:53:44.784-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.389-0500 c20012| 2016-04-06T02:53:44.784-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.391-0500 c20012| 2016-04-06T02:53:44.784-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.396-0500 c20012| 2016-04-06T02:53:44.784-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.398-0500 c20012| 2016-04-06T02:53:44.784-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.399-0500 c20012| 2016-04-06T02:53:44.784-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.400-0500 c20012| 2016-04-06T02:53:44.784-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.401-0500 c20012| 2016-04-06T02:53:44.784-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.404-0500 c20012| 2016-04-06T02:53:44.784-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.405-0500 c20012| 2016-04-06T02:53:44.785-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.405-0500 c20012| 2016-04-06T02:53:44.785-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.418-0500 c20012| 2016-04-06T02:53:45.286-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0ce'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225286), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.423-0500 c20012| 2016-04-06T02:53:45.287-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.425-0500 c20012| 2016-04-06T02:53:45.287-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.426-0500 c20012| 2016-04-06T02:53:45.287-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.426-0500 c20012| 2016-04-06T02:53:45.287-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.427-0500 c20012| 2016-04-06T02:53:45.287-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.427-0500 c20012| 2016-04-06T02:53:45.287-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.429-0500 c20012| 2016-04-06T02:53:45.287-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0ce'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225286), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.436-0500 c20012| 2016-04-06T02:53:45.287-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0ce'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225286), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0ce'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225286), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.437-0500 c20012| 2016-04-06T02:53:45.287-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.438-0500 c20012| 2016-04-06T02:53:45.287-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.440-0500 c20012| 2016-04-06T02:53:45.287-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.443-0500 c20012| 2016-04-06T02:53:45.287-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.446-0500 c20012| 2016-04-06T02:53:45.287-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.447-0500 c20012| 2016-04-06T02:53:45.288-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.448-0500 c20012| 2016-04-06T02:53:45.288-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.451-0500 c20012| 2016-04-06T02:53:45.288-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.452-0500 c20012| 2016-04-06T02:53:45.288-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.454-0500 c20012| 2016-04-06T02:53:45.288-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.456-0500 c20012| 2016-04-06T02:53:45.288-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.456-0500 c20012| 2016-04-06T02:53:45.289-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.459-0500 c20012| 2016-04-06T02:53:45.728-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 
1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.459-0500 c20012| 2016-04-06T02:53:45.728-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.460-0500 c20012| 2016-04-06T02:53:45.728-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.461-0500 c20012| 2016-04-06T02:53:45.728-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.463-0500 c20012| 2016-04-06T02:53:45.728-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.466-0500 c20012| 2016-04-06T02:53:45.729-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.467-0500 c20012| 2016-04-06T02:53:45.729-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.468-0500 c20012| 2016-04-06T02:53:45.729-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.469-0500 c20012| 2016-04-06T02:53:45.729-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.472-0500 c20012| 2016-04-06T02:53:45.729-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } 
numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.477-0500 c20012| 2016-04-06T02:53:45.732-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } cursorid:23538204668 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2500ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.480-0500 c20012| 2016-04-06T02:53:45.733-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.483-0500 c20012| 2016-04-06T02:53:45.742-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } cursorid:22842679084 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2508ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.486-0500 c20012| 2016-04-06T02:53:45.743-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.489-0500 c20012| 2016-04-06T02:53:45.755-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1434 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:55.755-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.492-0500 c20012| 2016-04-06T02:53:45.756-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1434 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:11.499-0500 c20012| 2016-04-06T02:53:45.756-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1434 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, opTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.500-0500 c20012| 2016-04-06T02:53:45.757-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:47.757Z [js_test:multi_coll_drop] 2016-04-06T02:54:11.505-0500 c20012| 2016-04-06T02:53:45.791-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0cf'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225790), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.508-0500 c20012| 2016-04-06T02:53:45.791-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.510-0500 c20012| 2016-04-06T02:53:45.791-0500 D 
QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.511-0500 c20012| 2016-04-06T02:53:45.791-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.513-0500 c20012| 2016-04-06T02:53:45.791-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.514-0500 c20012| 2016-04-06T02:53:45.791-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.516-0500 c20012| 2016-04-06T02:53:45.791-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.525-0500 c20012| 2016-04-06T02:53:45.791-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0cf'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225790), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.532-0500 c20012| 2016-04-06T02:53:45.791-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0cf'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225790), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08906c33406d4d9c0cf'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929225790), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.532-0500 c20012| 2016-04-06T02:53:45.797-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.546-0500 c20012| 2016-04-06T02:53:45.797-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.546-0500 c20012| 2016-04-06T02:53:45.797-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.547-0500 c20012| 2016-04-06T02:53:45.797-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.552-0500 c20012| 2016-04-06T02:53:45.797-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.555-0500 c20012| 2016-04-06T02:53:45.798-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.556-0500 c20012| 2016-04-06T02:53:45.798-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.558-0500 c20012| 2016-04-06T02:53:45.798-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.562-0500 c20012| 2016-04-06T02:53:45.798-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.569-0500 c20012| 2016-04-06T02:53:45.800-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.572-0500 c20012| 2016-04-06T02:53:45.801-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.575-0500 c20012| 2016-04-06T02:53:45.802-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.577-0500 c20012| 2016-04-06T02:53:46.143-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.579-0500 c20012| 2016-04-06T02:53:46.143-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:11.580-0500 c20012| 
2016-04-06T02:53:46.144-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.584-0500 c20012| 2016-04-06T02:53:46.303-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d0'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226303), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.585-0500 c20012| 2016-04-06T02:53:46.303-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.589-0500 c20012| 2016-04-06T02:53:46.303-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.591-0500 c20012| 2016-04-06T02:53:46.303-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.596-0500 c20012| 2016-04-06T02:53:46.304-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.597-0500 c20012| 2016-04-06T02:53:46.304-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.603-0500 c20012| 2016-04-06T02:53:46.304-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.606-0500 c20012| 2016-04-06T02:53:46.304-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d0'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226303), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.613-0500 c20012| 2016-04-06T02:53:46.304-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d0'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226303), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d0'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", 
when: new Date(1459929226303), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.617-0500 c20012| 2016-04-06T02:53:46.304-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.626-0500 c20012| 2016-04-06T02:53:46.304-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.627-0500 c20012| 2016-04-06T02:53:46.304-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.636-0500 c20012| 2016-04-06T02:53:46.304-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.655-0500 c20012| 2016-04-06T02:53:46.304-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.662-0500 c20012| 2016-04-06T02:53:46.305-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.666-0500 c20012| 2016-04-06T02:53:46.305-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.669-0500 c20012| 2016-04-06T02:53:46.305-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.671-0500 c20012| 2016-04-06T02:53:46.305-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.675-0500 c20012| 2016-04-06T02:53:46.305-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.675-0500 c20012| 2016-04-06T02:53:46.305-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.678-0500 c20012| 2016-04-06T02:53:46.307-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.681-0500 c20012| 2016-04-06T02:53:46.399-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.681-0500 c20012| 2016-04-06T02:53:46.399-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:11.684-0500 c20012| 2016-04-06T02:53:46.400-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.687-0500 c20012| 2016-04-06T02:53:46.401-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1436 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:56.401-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.688-0500 c20012| 2016-04-06T02:53:46.401-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1436 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:11.692-0500 c20012| 2016-04-06T02:53:46.402-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1436 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, opTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.693-0500 c20012| 2016-04-06T02:53:46.402-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:48.402Z [js_test:multi_coll_drop] 2016-04-06T02:54:11.698-0500 c20012| 2016-04-06T02:53:46.808-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d1'), state: 2, who: 
"mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226808), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.702-0500 c20012| 2016-04-06T02:53:46.808-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.705-0500 c20012| 2016-04-06T02:53:46.808-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.706-0500 c20012| 2016-04-06T02:53:46.808-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.707-0500 c20012| 2016-04-06T02:53:46.808-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.710-0500 c20012| 2016-04-06T02:53:46.808-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.712-0500 c20012| 2016-04-06T02:53:46.808-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:11.715-0500 c20012| 2016-04-06T02:53:46.808-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d1'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226808), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.723-0500 c20012| 2016-04-06T02:53:46.808-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d1'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226808), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08a06c33406d4d9c0d1'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929226808), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.726-0500 c20012| 2016-04-06T02:53:46.809-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.727-0500 c20012| 2016-04-06T02:53:46.809-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.729-0500 c20012| 2016-04-06T02:53:46.809-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.729-0500 c20012| 2016-04-06T02:53:46.809-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.732-0500 c20012| 2016-04-06T02:53:46.809-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.734-0500 c20012| 2016-04-06T02:53:46.809-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.737-0500 c20012| 2016-04-06T02:53:46.809-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.740-0500 c20012| 2016-04-06T02:53:46.809-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.755-0500 c20012| 2016-04-06T02:53:46.809-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.767-0500 c20012| 2016-04-06T02:53:46.810-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929220000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.773-0500 c20012| 2016-04-06T02:53:46.810-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.780-0500 c20012| 2016-04-06T02:53:46.810-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.789-0500 c20012| 2016-04-06T02:53:46.934-0500 D COMMAND [conn42] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929226934), up: 99, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.789-0500 c20012| 2016-04-06T02:53:46.934-0500 D QUERY [conn42] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.792-0500 c20012| 2016-04-06T02:53:46.934-0500 I WRITE [conn42] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929226934), up: 99, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.796-0500 c20012| 2016-04-06T02:53:46.934-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } cursorid:23538204668 numYields:1 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 1201ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.799-0500 c20012| 2016-04-06T02:53:46.937-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } cursorid:22842679084 numYields:1 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } 
} } protocol:op_command 1193ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.800-0500 c20012| 2016-04-06T02:53:46.938-0500 D REPL [conn42] Required snapshot optime: { ts: Timestamp 1459929226000|1, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929220000|3, t: 7 }, name-id: "264" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.802-0500 c20012| 2016-04-06T02:53:46.940-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.805-0500 c20012| 2016-04-06T02:53:46.946-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.805-0500 c20012| 2016-04-06T02:53:46.946-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.810-0500 c20012| 2016-04-06T02:53:46.946-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|1, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.814-0500 c20012| 2016-04-06T02:53:46.946-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 1459929226000|1, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929220000|3, t: 7 }, name-id: "264" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.815-0500 c20012| 2016-04-06T02:53:46.946-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.819-0500 c20012| 2016-04-06T02:53:46.946-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.821-0500 c20012| 2016-04-06T02:53:46.946-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.823-0500 c20012| 2016-04-06T02:53:46.952-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.825-0500 c20012| 2016-04-06T02:53:46.952-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.829-0500 c20012| 2016-04-06T02:53:46.952-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|1, t: 7 } and is durable through: { ts: Timestamp 1459929226000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.829-0500 c20012| 2016-04-06T02:53:46.952-0500 D REPL [conn45] Updating _lastCommittedOpTime to { ts: Timestamp 1459929226000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.833-0500 c20012| 2016-04-06T02:53:46.952-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.838-0500 c20012| 2016-04-06T02:53:46.952-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.839-0500 c20012| 2016-04-06T02:53:46.952-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } cursorid:22842679084 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.841-0500 c20012| 2016-04-06T02:53:46.952-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929220000|3, t: 7 } } cursorid:23538204668 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.843-0500 c20012| 2016-04-06T02:53:46.953-0500 I COMMAND [conn42] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929226934), up: 99, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 18ms 
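
The update to config.mongos just above illustrates how a { w: "majority" } write behaves on this config replica set: the primary (c20012) applies the update locally in 0ms, the secondaries report their replication progress back via replSetUpdatePosition, the primary advances _lastCommittedOpTime once a majority is durable at the update's optime, and only then does the update command return (hence the 18ms total for the command). A minimal mongo-shell sketch of the same ping update, with the document values copied from the log entry above (in a real cluster this command is issued by mongos itself, not by hand):

    // Majority write against the config servers, as logged above.
    var configDB = db.getSiblingDB("config");
    var res = configDB.runCommand({
        update: "mongos",
        updates: [{
            q: { _id: "mongovm16:20014" },
            u: { $set: { _id: "mongovm16:20014",
                         ping: new Date(1459929226934),
                         up: 99,
                         waiting: false,
                         mongoVersion: "3.3.4-37-g36f3ff8" } },
            multi: false,
            upsert: true
        }],
        writeConcern: { w: "majority", wtimeout: 15000 },  // blocks until majority-durable
        maxTimeMS: 30000
    });
    assert.commandWorked(res);
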
[js_test:multi_coll_drop] 2016-04-06T02:54:11.846-0500 c20012| 2016-04-06T02:53:46.953-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.850-0500 c20012| 2016-04-06T02:53:46.954-0500 D COMMAND [conn42] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.852-0500 c20012| 2016-04-06T02:53:46.954-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.853-0500 c20012| 2016-04-06T02:53:46.954-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.855-0500 c20012| 2016-04-06T02:53:46.954-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:11.857-0500 c20012| 2016-04-06T02:53:46.954-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.861-0500 c20012| 2016-04-06T02:53:46.955-0500 I COMMAND [conn42] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:443 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.870-0500 c20012| 2016-04-06T02:53:46.955-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.871-0500 c20012| 2016-04-06T02:53:46.955-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.873-0500 c20012| 2016-04-06T02:53:46.955-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.879-0500 c20012| 2016-04-06T02:53:46.955-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|1, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.881-0500 c20012| 2016-04-06T02:53:46.955-0500 I COMMAND 
[conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.885-0500 c20012| 2016-04-06T02:53:46.956-0500 D COMMAND [conn42] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.888-0500 c20012| 2016-04-06T02:53:46.956-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.889-0500 c20012| 2016-04-06T02:53:46.956-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.891-0500 c20012| 2016-04-06T02:53:46.956-0500 D QUERY [conn42] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:11.894-0500 c20012| 2016-04-06T02:53:46.956-0500 I COMMAND [conn42] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:434 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.898-0500 c20012| 2016-04-06T02:53:46.959-0500 D COMMAND [conn42] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929226959), up: 99, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.899-0500 c20012| 2016-04-06T02:53:46.959-0500 D QUERY [conn42] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.903-0500 c20012| 2016-04-06T02:53:46.959-0500 I WRITE [conn42] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929226959), up: 99, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.905-0500 c20012| 2016-04-06T02:53:46.959-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, 
collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } cursorid:22842679084 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.908-0500 c20012| 2016-04-06T02:53:46.959-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } cursorid:23538204668 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.910-0500 c20012| 2016-04-06T02:53:46.963-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.915-0500 c20012| 2016-04-06T02:53:46.963-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.916-0500 c20012| 2016-04-06T02:53:46.963-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.920-0500 c20012| 2016-04-06T02:53:46.963-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.922-0500 c20012| 2016-04-06T02:53:46.963-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.928-0500 c20012| 2016-04-06T02:53:46.963-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.930-0500 c20012| 2016-04-06T02:53:46.963-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, 
cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.931-0500 c20012| 2016-04-06T02:53:46.963-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.934-0500 c20012| 2016-04-06T02:53:46.963-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.935-0500 c20012| 2016-04-06T02:53:46.963-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.936-0500 c20012| 2016-04-06T02:53:46.963-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:11.938-0500 c20012| 2016-04-06T02:53:46.963-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.942-0500 c20012| 2016-04-06T02:53:46.969-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.943-0500 c20012| 2016-04-06T02:53:46.969-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.947-0500 c20012| 2016-04-06T02:53:46.969-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.948-0500 c20012| 2016-04-06T02:53:46.969-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.952-0500 c20012| 2016-04-06T02:53:46.969-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.955-0500 c20012| 2016-04-06T02:53:46.972-0500 D REPL [conn42] Required snapshot optime: { ts: Timestamp 1459929226000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929226000|1, t: 7 }, name-id: "265" } [js_test:multi_coll_drop] 2016-04-06T02:54:11.960-0500 c20012| 2016-04-06T02:53:46.974-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.961-0500 c20012| 2016-04-06T02:53:46.974-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.962-0500 c20012| 2016-04-06T02:53:46.974-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.964-0500 c20012| 2016-04-06T02:53:46.974-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.967-0500 c20012| 2016-04-06T02:53:46.974-0500 D REPL [conn46] Updating _lastCommittedOpTime to { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.971-0500 c20012| 2016-04-06T02:53:46.974-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.975-0500 c20012| 2016-04-06T02:53:46.974-0500 I COMMAND [conn42] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929226959), up: 99, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 15ms 
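
The "Waiting for 'committed' snapshot" and "Required snapshot optime ... is not yet part of the current 'committed' snapshot" lines show the read side of the same machinery: a read with readConcern { level: "majority", afterOpTime: ... } blocks until the node's committed snapshot has caught up to the requested optime, which in turn only advances as replSetUpdatePosition reports arrive. A sketch of the config.locks read that the distributed-lock code keeps issuing, assuming the logged optime Timestamp 1459929226000|2 corresponds to the shell form Timestamp(1459929226, 2); afterOpTime is normally filled in by the sharding code rather than typed by hand:

    // Majority read of the distributed-lock document, as logged above.
    var res = db.getSiblingDB("config").runCommand({
        find: "locks",
        filter: { _id: "multidrop.coll" },
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1459929226, 2), t: NumberLong(7) } },
        limit: 1,
        maxTimeMS: 30000
    });
    printjson(res.cursor.firstBatch);  // current holder: state, who, process, why
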
[js_test:multi_coll_drop] 2016-04-06T02:54:11.978-0500 c20012| 2016-04-06T02:53:46.976-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } cursorid:22842679084 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.982-0500 c20012| 2016-04-06T02:53:46.976-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:11.985-0500 c20012| 2016-04-06T02:53:46.976-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:11.988-0500 c20012| 2016-04-06T02:53:46.976-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.990-0500 c20012| 2016-04-06T02:53:46.976-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929220000|3, t: 7 } and is durable through: { ts: Timestamp 1459929220000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:11.996-0500 c20012| 2016-04-06T02:53:46.976-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929220000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.998-0500 c20012| 2016-04-06T02:53:46.977-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|1, t: 7 } } cursorid:23538204668 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 14ms [js_test:multi_coll_drop] 2016-04-06T02:54:11.999-0500 c20012| 2016-04-06T02:53:46.978-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.024-0500 c20012| 2016-04-06T02:53:46.979-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, 
t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.025-0500 c20012| 2016-04-06T02:53:47.312-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d2'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227311), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.026-0500 c20012| 2016-04-06T02:53:47.312-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.031-0500 c20012| 2016-04-06T02:53:47.312-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.032-0500 c20012| 2016-04-06T02:53:47.312-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.034-0500 c20012| 2016-04-06T02:53:47.312-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.035-0500 c20012| 2016-04-06T02:53:47.312-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:12.037-0500 c20012| 2016-04-06T02:53:47.312-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:12.048-0500 c20012| 2016-04-06T02:53:47.312-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d2'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227311), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.052-0500 c20012| 2016-04-06T02:53:47.312-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d2'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227311), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d2'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227311), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { 
acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.053-0500 c20012| 2016-04-06T02:53:47.313-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.057-0500 c20012| 2016-04-06T02:53:47.313-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.059-0500 c20012| 2016-04-06T02:53:47.313-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.061-0500 c20012| 2016-04-06T02:53:47.313-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:12.063-0500 c20012| 2016-04-06T02:53:47.313-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.068-0500 c20012| 2016-04-06T02:53:47.314-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.070-0500 c20012| 2016-04-06T02:53:47.314-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.071-0500 c20012| 2016-04-06T02:53:47.314-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.074-0500 c20012| 2016-04-06T02:53:47.314-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:12.080-0500 c20012| 2016-04-06T02:53:47.314-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.081-0500 c20012| 2016-04-06T02:53:47.314-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.082-0500 c20012| 2016-04-06T02:53:47.315-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.085-0500 c20012| 2016-04-06T02:53:47.758-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1438 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:57.758-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.088-0500 c20012| 2016-04-06T02:53:47.758-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1438 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:12.090-0500 c20012| 2016-04-06T02:53:47.759-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1438 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, opTime: { ts: Timestamp 1459929226000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.091-0500 c20012| 2016-04-06T02:53:47.759-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:49.759Z [js_test:multi_coll_drop] 2016-04-06T02:54:12.096-0500 c20012| 2016-04-06T02:53:47.820-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d3'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227819), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.099-0500 c20012| 2016-04-06T02:53:47.821-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.100-0500 c20012| 2016-04-06T02:53:47.821-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", 
ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.106-0500 c20012| 2016-04-06T02:53:47.821-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.108-0500 c20012| 2016-04-06T02:53:47.821-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.111-0500 c20012| 2016-04-06T02:53:47.821-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:12.111-0500 c20012| 2016-04-06T02:53:47.821-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:12.127-0500 c20012| 2016-04-06T02:53:47.821-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d3'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227819), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.133-0500 c20012| 2016-04-06T02:53:47.821-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d3'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227819), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d3'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227819), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.135-0500 c20012| 2016-04-06T02:53:47.824-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.137-0500 c20012| 2016-04-06T02:53:47.824-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.141-0500 c20012| 2016-04-06T02:53:47.824-0500 D COMMAND [conn42] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.142-0500 c20012| 2016-04-06T02:53:47.824-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:12.147-0500 c20012| 2016-04-06T02:53:47.826-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.149-0500 c20012| 2016-04-06T02:53:47.830-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.152-0500 c20012| 2016-04-06T02:53:47.830-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.158-0500 c20012| 2016-04-06T02:53:47.830-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.159-0500 c20012| 2016-04-06T02:53:47.830-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:12.164-0500 c20012| 2016-04-06T02:53:47.831-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.177-0500 c20012| 2016-04-06T02:53:47.832-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.177-0500 c20012| 2016-04-06T02:53:47.833-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.179-0500 c20012| 2016-04-06T02:53:48.145-0500 D COMMAND [conn31] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.179-0500 c20012| 2016-04-06T02:53:48.145-0500 D COMMAND [conn31] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:12.183-0500 c20012| 
2016-04-06T02:53:48.145-0500 I COMMAND [conn31] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.196-0500 c20012| 2016-04-06T02:53:48.334-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d4'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228334), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.197-0500 c20012| 2016-04-06T02:53:48.335-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.199-0500 c20012| 2016-04-06T02:53:48.335-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.207-0500 c20012| 2016-04-06T02:53:48.335-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.208-0500 c20012| 2016-04-06T02:53:48.335-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.209-0500 c20012| 2016-04-06T02:53:48.335-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:12.210-0500 c20012| 2016-04-06T02:53:48.335-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:12.213-0500 c20012| 2016-04-06T02:53:48.335-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d4'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228334), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.216-0500 c20012| 2016-04-06T02:53:48.335-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d4'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228334), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d4'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", 
when: new Date(1459929228334), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.217-0500 c20012| 2016-04-06T02:53:48.335-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.219-0500 c20012| 2016-04-06T02:53:48.335-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.221-0500 c20012| 2016-04-06T02:53:48.335-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.221-0500 c20012| 2016-04-06T02:53:48.335-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:12.225-0500 c20012| 2016-04-06T02:53:48.336-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.227-0500 c20012| 2016-04-06T02:53:48.336-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.228-0500 c20012| 2016-04-06T02:53:48.336-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.232-0500 c20012| 2016-04-06T02:53:48.336-0500 D COMMAND [conn42] Using 'committed' snapshot. 
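The timestamps show the same acquisition cycle repeating at roughly 500 ms intervals (02:53:47.311, 47.820, 48.334, 48.839, and onward), each attempt with a fresh lockSessionID. A simplified polling loop under that assumption (interval and deadline inferred from the log; the helper name is hypothetical, and the real lock manager also re-reads config.locks, config.lockpings, and serverStatus between attempts, as the surrounding entries show):

    // Hypothetical helper: poll until the lock frees or the deadline passes.
    function tryAcquireLoop(configDB, name, deadlineMs, intervalMs) {
        var start = Date.now();
        while (Date.now() - start < deadlineMs) {
            var res = configDB.runCommand({
                findAndModify: "locks",
                query: { _id: name, state: 0 },
                update: { $set: { state: 2, when: new Date(), why: "drop" } },
                upsert: true,
                new: true,
                writeConcern: { w: "majority", wtimeout: 15000 }
            });
            if (res.ok) {
                return res.value;              // acquired: the new lock document
            }
            if (res.code !== 11000) {
                throw Error(tojson(res));      // a real error, not "lock busy"
            }
            sleep(intervalMs);                 // mongo shell built-in
        }
        return null;   // still held; caller may consider a takeover check
    }
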
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.233-0500 c20012| 2016-04-06T02:53:48.336-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:12.237-0500 c20012| 2016-04-06T02:53:48.336-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.238-0500 c20012| 2016-04-06T02:53:48.337-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.238-0500 c20012| 2016-04-06T02:53:48.338-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.241-0500 c20012| 2016-04-06T02:53:48.400-0500 D COMMAND [conn37] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.242-0500 c20012| 2016-04-06T02:53:48.400-0500 D COMMAND [conn37] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:12.245-0500 c20012| 2016-04-06T02:53:48.401-0500 I COMMAND [conn37] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 7 } numYields:0 reslen:500 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.250-0500 c20012| 2016-04-06T02:53:48.403-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1440 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:58.403-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.251-0500 c20012| 2016-04-06T02:53:48.403-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1440 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:12.255-0500 c20012| 2016-04-06T02:53:48.403-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1440 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 7, primaryId: 1, durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, opTime: { ts: Timestamp 1459929226000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.260-0500 c20012| 2016-04-06T02:53:48.403-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:50.403Z [js_test:multi_coll_drop] 2016-04-06T02:54:12.262-0500 c20012| 2016-04-06T02:53:48.839-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d5'), state: 2, who: 
"mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228838), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.263-0500 c20012| 2016-04-06T02:53:48.839-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.267-0500 c20012| 2016-04-06T02:53:48.839-0500 D QUERY [conn42] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.271-0500 c20012| 2016-04-06T02:53:48.839-0500 D QUERY [conn42] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.273-0500 c20012| 2016-04-06T02:53:48.839-0500 D - [conn42] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.273-0500 c20012| 2016-04-06T02:53:48.839-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:12.274-0500 c20012| 2016-04-06T02:53:48.839-0500 D STORAGE [conn42] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:12.281-0500 c20012| 2016-04-06T02:53:48.840-0500 D COMMAND [conn42] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228838), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.287-0500 c20012| 2016-04-06T02:53:48.840-0500 I COMMAND [conn42] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228838), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228838), why: "drop" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.294-0500 c20012| 2016-04-06T02:53:48.840-0500 D COMMAND [conn42] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.296-0500 c20012| 2016-04-06T02:53:48.840-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.301-0500 c20012| 2016-04-06T02:53:48.840-0500 D COMMAND [conn42] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.303-0500 c20012| 2016-04-06T02:53:48.840-0500 D QUERY [conn42] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:12.305-0500 c20012| 2016-04-06T02:53:48.840-0500 I COMMAND [conn42] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.313-0500 c20012| 2016-04-06T02:53:48.841-0500 D COMMAND [conn42] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.317-0500 c20012| 2016-04-06T02:53:48.841-0500 D COMMAND [conn42] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.320-0500 c20012| 2016-04-06T02:53:48.841-0500 D COMMAND [conn42] Using 'committed' snapshot. 
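The lockpings writes in the entries that follow are the liveness half of the protocol: each sharding process periodically upserts a ping document keyed by its process string, and takeover decisions are judged against that ping's age. A minimal sketch of the same upsert (host and process id copied from the entries below):

    // Refresh this process's liveness document in config.lockpings.
    var configDB = new Mongo("mongovm16:20012").getDB("config");
    printjson(configDB.runCommand({
        findAndModify: "lockpings",
        query: { _id: "mongovm16:20015:1459929127:-1485108316" },
        update: { $set: { ping: new Date() } },
        upsert: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    }));
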
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.322-0500 c20012| 2016-04-06T02:53:48.841-0500 D QUERY [conn42] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:12.328-0500 c20012| 2016-04-06T02:53:48.841-0500 I COMMAND [conn42] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.328-0500 c20012| 2016-04-06T02:53:48.842-0500 D COMMAND [conn42] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.330-0500 c20012| 2016-04-06T02:53:48.842-0500 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25731 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.333-0500 c20012| 2016-04-06T02:53:48.969-0500 D COMMAND [conn48] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929228969) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.334-0500 c20012| 2016-04-06T02:53:48.969-0500 D QUERY [conn48] Using idhack: { _id: "mongovm16:20015:1459929127:-1485108316" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.337-0500 c20012| 2016-04-06T02:53:48.971-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } cursorid:22842679084 numYields:1 nreturned:1 reslen:526 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 1992ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.338-0500 c20012| 2016-04-06T02:53:48.971-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } cursorid:23538204668 numYields:1 nreturned:1 reslen:526 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 1992ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.340-0500 c20012| 2016-04-06T02:53:48.973-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929228970) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.341-0500 c20012| 2016-04-06T02:53:48.973-0500 D QUERY [conn42] Using idhack: { _id: "mongovm16:20014:1459929123:-665935931" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.342-0500 c20012| 
2016-04-06T02:53:48.974-0500 D REPL [conn48] Required snapshot optime: { ts: Timestamp 1459929228000|1, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929226000|2, t: 7 }, name-id: "266" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.345-0500 c20012| 2016-04-06T02:53:48.974-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.346-0500 c20012| 2016-04-06T02:53:48.974-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.349-0500 c20012| 2016-04-06T02:53:48.974-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } cursorid:22842679084 numYields:0 nreturned:1 reslen:525 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.352-0500 c20012| 2016-04-06T02:53:48.974-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } cursorid:23538204668 numYields:0 nreturned:1 reslen:525 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.354-0500 c20012| 2016-04-06T02:53:48.975-0500 D REPL [conn42] Required snapshot optime: { ts: Timestamp 1459929228000|1, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929226000|2, t: 7 }, name-id: "266" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.357-0500 c20012| 2016-04-06T02:53:48.975-0500 D REPL [conn42] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929226000|2, t: 7 }, name-id: "266" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.360-0500 c20012| 2016-04-06T02:53:48.976-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:12.361-0500 c20012| 2016-04-06T02:53:48.976-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:12.366-0500 c20012| 2016-04-06T02:53:48.976-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|1, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.370-0500 c20012| 2016-04-06T02:53:48.976-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 
1459929228000|1, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929226000|2, t: 7 }, name-id: "266" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.372-0500 c20012| 2016-04-06T02:53:48.976-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929226000|2, t: 7 }, name-id: "266" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.374-0500 c20012| 2016-04-06T02:53:48.976-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.379-0500 c20012| 2016-04-06T02:53:48.976-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.382-0500 c20012| 2016-04-06T02:53:48.977-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.386-0500 c20012| 2016-04-06T02:53:48.981-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.389-0500 c20012| 2016-04-06T02:53:48.981-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:12.390-0500 c20012| 2016-04-06T02:53:48.981-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:12.392-0500 c20012| 2016-04-06T02:53:48.981-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|1, t: 7 } and is durable through: { ts: Timestamp 1459929228000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.392-0500 c20012| 2016-04-06T02:53:48.981-0500 D REPL [conn45] Updating _lastCommittedOpTime to { ts: Timestamp 1459929228000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.395-0500 c20012| 2016-04-06T02:53:48.982-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.396-0500 
c20012| 2016-04-06T02:53:48.982-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.400-0500 c20012| 2016-04-06T02:53:48.982-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.405-0500 c20012| 2016-04-06T02:53:48.982-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.409-0500 c20012| 2016-04-06T02:53:48.982-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } cursorid:23538204668 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.416-0500 c20012| 2016-04-06T02:53:48.982-0500 I COMMAND [conn48] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20015:1459929127:-1485108316" }, update: { $set: { ping: new Date(1459929228969) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1459929228969) } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:429 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.422-0500 c20012| 2016-04-06T02:53:48.982-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929226000|2, t: 7 } } cursorid:22842679084 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.424-0500 c20012| 2016-04-06T02:53:48.983-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.427-0500 c20012| 2016-04-06T02:53:48.985-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { 
durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:12.427-0500 c20012| 2016-04-06T02:53:48.985-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:12.428-0500 c20012| 2016-04-06T02:53:48.985-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.429-0500 c20012| 2016-04-06T02:53:48.985-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|1, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.433-0500 c20012| 2016-04-06T02:53:48.985-0500 D REPL [conn46] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.438-0500 c20012| 2016-04-06T02:53:48.985-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.441-0500 c20012| 2016-04-06T02:53:48.985-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:12.445-0500 c20012| 2016-04-06T02:53:48.988-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:12.446-0500 c20012| 2016-04-06T02:53:48.988-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:12.448-0500 s20014| 2016-04-06T02:53:46.955-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929226000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.449-0500 s20014| 2016-04-06T02:53:46.955-0500 D ASIO [Balancer] startCommand: RemoteCommand 894 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.955-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.451-0500 s20014| 2016-04-06T02:53:46.955-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 894 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.453-0500 s20014| 2016-04-06T02:53:46.956-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 894 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.454-0500 s20014| 2016-04-06T02:53:46.957-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:54:12.456-0500 s20014| 2016-04-06T02:53:46.957-0500 D ASIO [Balancer] startCommand: RemoteCommand 896 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:16.957-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|1, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.457-0500 s20014| 2016-04-06T02:53:46.957-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 896 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:12.460-0500 s20014| 2016-04-06T02:53:46.958-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 896 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.463-0500 s20014| 2016-04-06T02:53:46.959-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:54:12.466-0500 s20014| 2016-04-06T02:53:46.959-0500 D ASIO [Balancer] startCommand: RemoteCommand 898 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:16.959-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929226959), up: 99, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.467-0500 s20014| 2016-04-06T02:53:46.959-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 898 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.468-0500 s20014| 2016-04-06T02:53:46.976-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 898 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929226000|2, t: 7 }, electionId: ObjectId('7fffffff0000000000000007') } [js_test:multi_coll_drop] 2016-04-06T02:54:12.473-0500 s20014| 2016-04-06T02:53:47.311-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08b06c33406d4d9c0d2, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:12.476-0500 s20014| 2016-04-06T02:53:47.311-0500 D ASIO [conn1] startCommand: RemoteCommand 900 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:17.311-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d2'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: 
"mongovm16:20014:1459929123:-665935931", when: new Date(1459929227311), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.478-0500 s20014| 2016-04-06T02:53:47.312-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 900 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.482-0500 s20014| 2016-04-06T02:53:47.312-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 900 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.483-0500 s20014| 2016-04-06T02:53:47.313-0500 D ASIO [conn1] startCommand: RemoteCommand 902 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:17.313-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.485-0500 s20014| 2016-04-06T02:53:47.313-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 902 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.489-0500 s20014| 2016-04-06T02:53:47.313-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 902 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.492-0500 s20014| 2016-04-06T02:53:47.314-0500 D ASIO [conn1] startCommand: RemoteCommand 904 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:17.314-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.494-0500 s20014| 2016-04-06T02:53:47.314-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 904 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.495-0500 s20014| 2016-04-06T02:53:47.314-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 904 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.497-0500 s20014| 2016-04-06T02:53:47.314-0500 D ASIO [conn1] startCommand: RemoteCommand 906 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:17.314-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.498-0500 s20014| 2016-04-06T02:53:47.314-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 906 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.513-0500 s20014| 2016-04-06T02:53:47.316-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 906 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 110.0, uptimeMillis: 110174, uptimeEstimate: 96.0, localTime: new Date(1459929227314), asserts: { regular: 0, warning: 0, msg: 0, user: 54, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133862936, page_faults: 0 }, globalLock: { totalTime: 110171000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3970, w: 821, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1327, w: 266, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 715, w: 234 } }, Metadata: { acquireCount: { w: 83, W: 494 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 646 } }, oplog: { acquireCount: { r: 626, w: 39, R: 1, W: 1 } } }, network: { bytesIn: 229265, bytesOut: 1715416, numRequests: 943 }, opcounters: { insert: 6, query: 289, update: 12, delete: 0, getmore: 125, command: 530 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133864456, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1286144, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1837480, total_free_bytes: 2970616, central_cache_free_bytes: 198256, transfer_cache_free_bytes: 934880, thread_cache_free_bytes: 1837480, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 145, num_central_objs: 924, num_transfer_objs: 0, free_bytes: 8552, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1615, num_central_objs: 111, num_transfer_objs: 1536, free_bytes: 104384, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 25, num_thread_objs: 763, num_central_objs: 107, num_transfer_objs: 340, free_bytes: 58080, allocated_bytes: 204800 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 521, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400384, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 35, num_thread_objs: 497, num_central_objs: 37, num_transfer_objs: 1938, free_bytes: 197760, allocated_bytes: 286720 }, { bytes_per_object: 96, pages .......... 
cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 80 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 140 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 53 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 12 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 458, updated: 24 }, getLastError: { wtime: { num: 36, totalMillis: 5786 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 127, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 293, scannedObjects: 426 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 471, waits: 1769, scheduledNetCmd: 98, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 559, scheduledWork: 1936, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:12.513-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:12.514-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:12.514-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:12.515-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:12.516-0500 s20014| Succeeded 87 [js_test:multi_coll_drop] 2016-04-06T02:54:12.519-0500 s20014| Canceled..." 
}, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.522-0500 s20014| 2016-04-06T02:53:47.316-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:12.524-0500 s20014| 2016-04-06T02:53:47.316-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 6583 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.526-0500 s20014| 2016-04-06T02:53:47.316-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:12.527-0500 s20014| 2016-04-06T02:53:47.819-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08b06c33406d4d9c0d3, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:12.530-0500 s20014| 2016-04-06T02:53:47.819-0500 D ASIO [conn1] startCommand: RemoteCommand 908 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:17.819-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08b06c33406d4d9c0d3'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929227819), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.531-0500 s20014| 2016-04-06T02:53:47.820-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 908 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.533-0500 s20014| 2016-04-06T02:53:47.821-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 908 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.535-0500 s20014| 2016-04-06T02:53:47.822-0500 D ASIO [conn1] startCommand: RemoteCommand 910 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:17.822-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.536-0500 s20014| 2016-04-06T02:53:47.824-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 910 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.540-0500 s20014| 2016-04-06T02:53:47.826-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 910 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: 
MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.542-0500 s20014| 2016-04-06T02:53:47.829-0500 D ASIO [conn1] startCommand: RemoteCommand 912 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:17.829-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.543-0500 s20014| 2016-04-06T02:53:47.830-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 912 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.546-0500 s20014| 2016-04-06T02:53:47.831-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 912 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.548-0500 s20014| 2016-04-06T02:53:47.832-0500 D ASIO [conn1] startCommand: RemoteCommand 914 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:17.832-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.549-0500 s20014| 2016-04-06T02:53:47.832-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 914 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.562-0500 s20014| 2016-04-06T02:53:47.834-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 914 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 110.0, uptimeMillis: 110692, uptimeEstimate: 96.0, localTime: new Date(1459929227832), asserts: { regular: 0, warning: 0, msg: 0, user: 55, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133863208, page_faults: 0 }, globalLock: { totalTime: 110690000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3975, w: 822, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1329, w: 267, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 717, w: 235 } }, Metadata: { acquireCount: { w: 83, W: 494 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 646 } }, oplog: { acquireCount: { r: 626, w: 39, R: 1, W: 1 } } }, network: { bytesIn: 230226, bytesOut: 1742453, numRequests: 947 }, opcounters: { insert: 6, query: 291, update: 12, delete: 0, getmore: 125, command: 532 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133864728, heap_size: 138121216 
}, tcmalloc: { pageheap_free_bytes: 1286144, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1846024, total_free_bytes: 2970344, central_cache_free_bytes: 189440, transfer_cache_free_bytes: 934880, thread_cache_free_bytes: 1846024, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 163, num_central_objs: 906, num_transfer_objs: 0, free_bytes: 8552, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1615, num_central_objs: 111, num_transfer_objs: 1536, free_bytes: 104384, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 25, num_thread_objs: 762, num_central_objs: 107, num_transfer_objs: 340, free_bytes: 58032, allocated_bytes: 204800 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 521, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400384, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 35, num_thread_objs: 497, num_central_objs: 37, num_transfer_objs: 1938, free_bytes: 197760, allocated_bytes: 286720 }, { bytes_per_object: 96, pages .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 80 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 140 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 54 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 12 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, 
total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 460, updated: 24 }, getLastError: { wtime: { num: 36, totalMillis: 5786 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 129, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 295, scannedObjects: 428 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 471, waits: 1773, scheduledNetCmd: 99, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 560, scheduledWork: 1940, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:12.562-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:12.563-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:12.567-0500 c20012| 2016-04-06T02:53:48.988-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|2, t: 7 } and is durable through: { ts: Timestamp 1459929228000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.567-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:12.569-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:12.569-0500 s20014| Succeeded 88 [js_test:multi_coll_drop] 2016-04-06T02:54:12.572-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.574-0500 s20014| 2016-04-06T02:53:47.834-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:12.575-0500 s20014| 2016-04-06T02:53:47.834-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 7102 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.577-0500 s20014| 2016-04-06T02:53:47.834-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
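The three SHARDING lines above are one complete pass of the mongos distributed-lock loop for the drop: the findAndModify upsert on config.locks collides with the lock document still held by mongovm16:20010 for "splitting chunk", fails with E11000 because the _id already exists in state 2, and the follow-up reads of config.locks and config.lockpings show the holder pinged only seconds ago, far under the 900000 ms takeover threshold, so s20014 gives up and retries roughly every 500 ms (attempts at 02:53:47.819, 02:53:48.334, ...). A minimal mongo-shell sketch of one such attempt, run against the config server primary; the helper name and return convention are illustrative, not the server's internals:

    function tryAcquireDistLock(configDB, lockName, processId, why) {
        // One acquire attempt, mirroring RemoteCommand 908 above.
        var lockSessionID = ObjectId();
        var res = configDB.runCommand({
            findAndModify: "locks",
            query: { _id: lockName, state: 0 },           // match only an unlocked document
            update: { $set: { ts: lockSessionID, state: 2,
                              who: processId + ":conn1", process: processId,
                              when: new Date(), why: why } },
            upsert: true,
            new: true,
            writeConcern: { w: "majority", wtimeout: 15000 },
            maxTimeMS: 30000
        });
        if (res.ok) return lockSessionID;                 // acquired; caller owns the lock
        if (res.code === 11000) return null;              // held by someone else; retry later
        throw Error("lock attempt failed: " + tojson(res));
    }

On a null return the caller falls through to the ping/takeover check, which the next iteration below prints as "could not force lock".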
[js_test:multi_coll_drop] 2016-04-06T02:54:12.578-0500 s20014| 2016-04-06T02:53:48.334-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08c06c33406d4d9c0d4, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:12.585-0500 s20014| 2016-04-06T02:53:48.334-0500 D ASIO [conn1] startCommand: RemoteCommand 916 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:18.334-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d4'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228334), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.586-0500 s20014| 2016-04-06T02:53:48.334-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 916 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.587-0500 s20014| 2016-04-06T02:53:48.335-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 916 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.594-0500 s20014| 2016-04-06T02:53:48.335-0500 D ASIO [conn1] startCommand: RemoteCommand 918 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:18.335-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.596-0500 s20015| 2016-04-06T02:53:59.145-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:12.598-0500 s20014| 2016-04-06T02:53:48.335-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 918 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.602-0500 s20014| 2016-04-06T02:53:48.336-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 918 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.608-0500 s20014| 2016-04-06T02:53:48.336-0500 D ASIO [conn1] startCommand: RemoteCommand 920 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:18.336-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.610-0500 s20014| 2016-04-06T02:53:48.336-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 920 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.619-0500 s20014| 2016-04-06T02:53:48.336-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 920 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: 
"mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.622-0500 s20014| 2016-04-06T02:53:48.336-0500 D ASIO [conn1] startCommand: RemoteCommand 922 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:18.336-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.627-0500 s20014| 2016-04-06T02:53:48.337-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 922 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:12.671-0500 s20014| 2016-04-06T02:53:48.338-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 922 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 111.0, uptimeMillis: 111197, uptimeEstimate: 97.0, localTime: new Date(1459929228337), asserts: { regular: 0, warning: 0, msg: 0, user: 56, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133863464, page_faults: 0 }, globalLock: { totalTime: 111194000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3982, w: 823, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1332, w: 268, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 719, w: 236 } }, Metadata: { acquireCount: { w: 83, W: 494 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 646 } }, oplog: { acquireCount: { r: 627, w: 39, R: 1, W: 1 } } }, network: { bytesIn: 231359, bytesOut: 1770006, numRequests: 952 }, opcounters: { insert: 6, query: 293, update: 12, delete: 0, getmore: 125, command: 535 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133864984, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1286144, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1844792, total_free_bytes: 2970088, central_cache_free_bytes: 190416, transfer_cache_free_bytes: 934880, thread_cache_free_bytes: 1844792, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 163, num_central_objs: 906, num_transfer_objs: 0, free_bytes: 8552, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 
1677, num_central_objs: 48, num_transfer_objs: 1536, free_bytes: 104352, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 25, num_thread_objs: 783, num_central_objs: 86, num_transfer_objs: 340, free_bytes: 58032, allocated_bytes: 204800 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 521, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400384, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 35, num_thread_objs: 497, num_central_objs: 37, num_transfer_objs: 1938, free_bytes: 197760, allocated_bytes: 286720 }, { bytes_per_object: 96, pages_p .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 81 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 140 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 55 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 12 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 462, updated: 24 }, getLastError: { wtime: { num: 36, totalMillis: 5786 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 131, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 297, scannedObjects: 430 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 471, waits: 1781, scheduledNetCmd: 99, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 560, scheduledWork: 1948, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:12.671-0500 s20014| 
NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:12.672-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:12.672-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:12.676-0500 c20012| 2016-04-06T02:53:48.988-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:12.677-0500 c20012| 2016-04-06T02:53:48.989-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.683-0500 c20012| 2016-04-06T02:53:48.989-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.686-0500 c20012| 2016-04-06T02:53:48.990-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:12.688-0500 c20012| 2016-04-06T02:53:48.990-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:12.689-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:12.689-0500 s20014| Succeeded 88 [js_test:multi_coll_drop] 2016-04-06T02:54:12.976-0500 s20014| Canceled..." }, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:12.987-0500 s20014| 2016-04-06T02:53:48.338-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:12.990-0500 s20014| 2016-04-06T02:53:48.338-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 7607 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:12.990-0500 s20014| 2016-04-06T02:53:48.338-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
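The "could not force lock" lines are the takeover decision: s20014 measures how long the holder's config.lockpings entry has gone without advancing since it first observed it (elapsed 6583, 7102, 7607 ms across the retries, against the ping from 02:53:11.721) and compares that to the 900000 ms lock timeout. A simplified sketch of the check, assuming direct reads of the config collections; canForceLock is an illustrative name, and the real implementation times how long the ping value has stayed unchanged rather than subtracting the ping time from the wall clock:

    function canForceLock(configDB, lockName, takeoverMillis) {
        var lock = configDB.locks.findOne({ _id: lockName });
        if (!lock || lock.state === 0) {
            return true;                              // unlocked: nothing to force
        }
        var ping = configDB.lockpings.findOne({ _id: lock.process });
        var lastSeen = ping ? ping.ping : lock.when;
        var elapsedMillis = new Date() - lastSeen;    // Date subtraction yields ms
        return elapsedMillis >= takeoverMillis;       // 900000 ms in this log
    }

Because the holder's pinger on mongovm16:20010 is still running (Request 938 below returns ping: new Date(1459929228990), i.e. 02:53:48.990), the elapsed time never approaches the threshold, and the drop can only proceed once the "splitting chunk" lock is released normally.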
[js_test:multi_coll_drop] 2016-04-06T02:54:12.995-0500 c20012| 2016-04-06T02:53:48.990-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.025-0500 c20012| 2016-04-06T02:53:48.990-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|1, t: 7 } and is durable through: { ts: Timestamp 1459929228000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.046-0500 c20012| 2016-04-06T02:53:48.990-0500 D REPL [conn46] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.059-0500 c20012| 2016-04-06T02:53:48.990-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.064-0500 c20012| 2016-04-06T02:53:48.991-0500 D COMMAND [conn38] run command config.$cmd { findAndModify: "lockpings", query: { _id: "mongovm16:20010:1459929128:185613966" }, update: { $set: { ping: new Date(1459929228990) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.066-0500 c20012| 2016-04-06T02:53:48.991-0500 D QUERY [conn38] Using idhack: { _id: "mongovm16:20010:1459929128:185613966" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.068-0500 c20012| 2016-04-06T02:53:48.991-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.070-0500 c20012| 2016-04-06T02:53:48.991-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|1, t: 7 } } cursorid:22842679084 numYields:0 nreturned:1 reslen:524 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.073-0500 c20012| 2016-04-06T02:53:48.991-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|1, t: 7 } } cursorid:23538204668 numYields:0 nreturned:1 reslen:524 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.080-0500 c20012| 2016-04-06T02:53:48.993-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:13.081-0500 c20012| 2016-04-06T02:53:48.993-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:13.083-0500 c20012| 2016-04-06T02:53:48.993-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.095-0500 c20012| 2016-04-06T02:53:48.993-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|2, t: 7 } and is durable through: { ts: Timestamp 1459929228000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.097-0500 c20012| 2016-04-06T02:53:48.993-0500 D REPL [conn46] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.103-0500 c20012| 2016-04-06T02:53:48.993-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.104-0500 c20012| 2016-04-06T02:53:48.997-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.108-0500 c20012| 2016-04-06T02:53:48.997-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|1, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.120-0500 c20012| 2016-04-06T02:53:48.999-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:13.120-0500 c20012| 2016-04-06T02:53:48.999-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:13.121-0500 c20012| 2016-04-06T02:53:48.999-0500 D REPL 
[conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|3, t: 7 } and is durable through: { ts: Timestamp 1459929228000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.122-0500 c20012| 2016-04-06T02:53:48.999-0500 D REPL [conn45] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.124-0500 c20012| 2016-04-06T02:53:48.999-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.146-0500 c20012| 2016-04-06T02:53:48.999-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.157-0500 c20012| 2016-04-06T02:53:49.034-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:13.158-0500 c20012| 2016-04-06T02:53:49.034-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:13.162-0500 c20012| 2016-04-06T02:53:49.034-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.163-0500 c20012| 2016-04-06T02:53:49.034-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|3, t: 7 } and is durable through: { ts: Timestamp 1459929228000|1, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.165-0500 c20012| 2016-04-06T02:53:49.034-0500 D REPL [conn46] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.168-0500 c20012| 2016-04-06T02:53:49.034-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: 
Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|1, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.169-0500 c20012| 2016-04-06T02:53:49.038-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929228000|2, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.174-0500 c20012| 2016-04-06T02:53:49.038-0500 D REPL [conn38] Required snapshot optime: { ts: Timestamp 1459929228000|3, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|1, t: 7 }, name-id: "267" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.176-0500 s20014| 2016-04-06T02:53:48.838-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08c06c33406d4d9c0d5, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:13.182-0500 s20014| 2016-04-06T02:53:48.838-0500 D ASIO [conn1] startCommand: RemoteCommand 924 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:18.838-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08c06c33406d4d9c0d5'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929228838), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.184-0500 s20014| 2016-04-06T02:53:48.839-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 924 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.187-0500 s20014| 2016-04-06T02:53:48.840-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 924 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.190-0500 c20012| 2016-04-06T02:53:49.039-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:13.190-0500 c20012| 2016-04-06T02:53:49.039-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:13.191-0500 c20012| 2016-04-06T02:53:49.039-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.194-0500 c20012| 2016-04-06T02:53:49.039-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 
1459929228000|3, t: 7 } and is durable through: { ts: Timestamp 1459929228000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.195-0500 c20012| 2016-04-06T02:53:49.039-0500 D REPL [conn46] Updating _lastCommittedOpTime to { ts: Timestamp 1459929228000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.198-0500 c20012| 2016-04-06T02:53:49.039-0500 D REPL [conn46] Required snapshot optime: { ts: Timestamp 1459929228000|3, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|2, t: 7 }, name-id: "268" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.210-0500 c20012| 2016-04-06T02:53:49.039-0500 D REPL [conn46] Required snapshot optime: { ts: Timestamp 1459929228000|3, t: 7 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929228000|2, t: 7 }, name-id: "268" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.213-0500 c20012| 2016-04-06T02:53:49.039-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.216-0500 c20012| 2016-04-06T02:53:49.039-0500 I COMMAND [conn42] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929228970) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1459929228970) } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:428 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 66ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.221-0500 c20012| 2016-04-06T02:53:49.040-0500 D COMMAND [conn46] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:13.222-0500 c20012| 2016-04-06T02:53:49.040-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|1, t: 7 } } cursorid:23538204668 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 43ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.223-0500 c20012| 2016-04-06T02:53:49.040-0500 D COMMAND [conn46] command: replSetUpdatePosition [js_test:multi_coll_drop] 
2016-04-06T02:54:13.224-0500 c20012| 2016-04-06T02:53:49.040-0500 D REPL [conn46] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.225-0500 c20012| 2016-04-06T02:53:49.040-0500 D REPL [conn46] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|3, t: 7 } and is durable through: { ts: Timestamp 1459929228000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.226-0500 c20012| 2016-04-06T02:53:49.040-0500 D REPL [conn46] Updating _lastCommittedOpTime to { ts: Timestamp 1459929228000|3, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.228-0500 c20012| 2016-04-06T02:53:49.040-0500 I COMMAND [conn46] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.230-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.232-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.232-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.233-0500 c20013| 2016-04-06T02:52:43.299-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.234-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.236-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.237-0500 c20013| 2016-04-06T02:52:43.299-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.238-0500 c20013| 2016-04-06T02:52:43.299-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.239-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.240-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.241-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.242-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
2016-04-06T02:54:13.243-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.244-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:13.246-0500 s20014| 2016-04-06T02:53:48.840-0500 D ASIO [conn1] startCommand: RemoteCommand 926 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:18.840-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.248-0500 s20014| 2016-04-06T02:53:48.840-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 926 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.251-0500 s20014| 2016-04-06T02:53:48.840-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 926 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.253-0500 s20014| 2016-04-06T02:53:48.840-0500 D ASIO [conn1] startCommand: RemoteCommand 928 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:18.840-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929226000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.254-0500 s20014| 2016-04-06T02:53:48.841-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 928 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.255-0500 s20014| 2016-04-06T02:53:48.841-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 928 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929191721) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.257-0500 s20014| 2016-04-06T02:53:48.841-0500 D ASIO [conn1] startCommand: RemoteCommand 930 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:18.841-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.258-0500 s20014| 2016-04-06T02:53:48.842-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 930 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.269-0500 s20014| 2016-04-06T02:53:48.843-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 930 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 111.0, uptimeMillis: 111702, uptimeEstimate: 97.0, localTime: new Date(1459929228842), asserts: { regular: 0, warning: 0, msg: 0, user: 57, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133863720, page_faults: 0 }, globalLock: { totalTime: 111699000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3987, w: 824, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1334, w: 269, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 721, w: 237 } }, Metadata: { acquireCount: { w: 83, W: 494 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 646 } }, oplog: { acquireCount: { r: 627, w: 39, R: 1, W: 1 } } }, network: { bytesIn: 232492, bytesOut: 1797559, numRequests: 957 }, opcounters: { insert: 6, query: 295, update: 12, delete: 0, getmore: 125, command: 538 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133865240, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1286144, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1848536, total_free_bytes: 2969832, central_cache_free_bytes: 186416, transfer_cache_free_bytes: 934880, thread_cache_free_bytes: 1848536, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 163, num_central_objs: 906, num_transfer_objs: 0, free_bytes: 8552, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 400, num_central_objs: 587, num_transfer_objs: 0, free_bytes: 15792, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1676, num_central_objs: 48, num_transfer_objs: 1536, free_bytes: 104320, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 25, num_thread_objs: 783, num_central_objs: 86, num_transfer_objs: 340, free_bytes: 58032, allocated_bytes: 204800 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 521, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400384, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 35, num_thread_objs: 497, num_central_objs: 37, num_transfer_objs: 1938, free_bytes: 197760, allocated_bytes: 286720 }, { bytes_per_object: 96, pages_p .......... 
heSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 82 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 140 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 56 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 12 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 464, updated: 24 }, getLastError: { wtime: { num: 36, totalMillis: 5786 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 133, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 299, scannedObjects: 432 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 471, waits: 1787, scheduledNetCmd: 100, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 561, scheduledWork: 1954, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:13.270-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:13.273-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:13.273-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.273-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.274-0500 s20014| Succeeded 89 [js_test:multi_coll_drop] 2016-04-06T02:54:13.280-0500 s20014| Canceled..." 
}, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.286-0500 s20014| 2016-04-06T02:53:48.848-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:13.290-0500 s20014| 2016-04-06T02:53:48.848-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 8109 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.291-0500 s20014| 2016-04-06T02:53:48.848-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:13.295-0500 s20014| 2016-04-06T02:53:48.970-0500 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 932 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:18.970-0500 cmd:{ findAndModify: "lockpings", query: { _id: "mongovm16:20014:1459929123:-665935931" }, update: { $set: { ping: new Date(1459929228970) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.298-0500 s20014| 2016-04-06T02:53:48.973-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 932 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.300-0500 s20014| 2016-04-06T02:53:49.039-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 932 finished with response: { lastErrorObject: { updatedExisting: true, n: 1 }, value: { _id: "mongovm16:20014:1459929123:-665935931", ping: new Date(1459929123720) }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.303-0500 s20014| 2016-04-06T02:53:49.348-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08d06c33406d4d9c0d6, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:13.309-0500 s20014| 2016-04-06T02:53:49.348-0500 D ASIO [conn1] startCommand: RemoteCommand 934 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:19.348-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08d06c33406d4d9c0d6'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929229348), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.310-0500 s20014| 2016-04-06T02:53:49.349-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 934 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.312-0500 s20014| 2016-04-06T02:53:49.350-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 934 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.313-0500 s20014| 
2016-04-06T02:53:49.350-0500 D ASIO [conn1] startCommand: RemoteCommand 936 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:19.350-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929228000|2, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.316-0500 s20014| 2016-04-06T02:53:49.350-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 936 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.326-0500 s20014| 2016-04-06T02:53:49.350-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 936 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.328-0500 s20014| 2016-04-06T02:53:49.350-0500 D ASIO [conn1] startCommand: RemoteCommand 938 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:19.350-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929228000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.330-0500 s20014| 2016-04-06T02:53:49.351-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 938 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.333-0500 s20014| 2016-04-06T02:53:49.351-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 938 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.336-0500 s20014| 2016-04-06T02:53:49.351-0500 D ASIO [conn1] startCommand: RemoteCommand 940 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:54:19.351-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.339-0500 s20014| 2016-04-06T02:53:49.351-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 940 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.349-0500 s20014| 2016-04-06T02:53:49.352-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 940 finished with response: { host: "mongovm16:20012", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 65723, uptime: 112.0, uptimeMillis: 112211, uptimeEstimate: 98.0, localTime: new Date(1459929229351), asserts: { regular: 0, warning: 0, msg: 0, user: 58, rollovers: 0 }, connections: { current: 17, available: 51183, totalCreated: 48 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133874928, page_faults: 0 }, globalLock: { totalTime: 112208000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 34, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 4092, w: 831, R: 172, W: 342 }, acquireWaitCount: { r: 18, w: 2, W: 9 }, timeAcquiringMicros: { r: 79690, w: 22138, W: 3261 } }, Database: { acquireCount: { r: 1380, w: 276, W: 555 }, acquireWaitCount: { r: 115, w: 1, W: 22 }, timeAcquiringMicros: { r: 15661, w: 7420, W: 5681 } }, Collection: { acquireCount: { r: 739, w: 241 } }, Metadata: { acquireCount: { w: 86, W: 500 }, acquireWaitCount: { W: 9 }, timeAcquiringMicros: { W: 696 } }, oplog: { acquireCount: { r: 655, w: 42, R: 1, W: 1 } } }, network: { bytesIn: 241986, bytesOut: 1833156, numRequests: 989 }, opcounters: { insert: 6, query: 297, update: 12, delete: 0, getmore: 136, command: 557 }, opcountersRepl: { insert: 61, query: 0, update: 170, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20012", me: "mongovm16:20012", electionId: ObjectId('7fffffff0000000000000007'), rbid: 1287542267 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133876448, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 1241088, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1850672, total_free_bytes: 3003680, central_cache_free_bytes: 209936, transfer_cache_free_bytes: 943072, thread_cache_free_bytes: 1850672, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 164, num_central_objs: 892, num_transfer_objs: 0, free_bytes: 8448, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 390, num_central_objs: 591, num_transfer_objs: 0, free_bytes: 15696, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 38, num_thread_objs: 1871, num_central_objs: 99, num_transfer_objs: 1536, free_bytes: 112192, allocated_bytes: 311296 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 25, num_thread_objs: 751, num_central_objs: 90, num_transfer_objs: 340, free_bytes: 56688, allocated_bytes: 204800 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 58, num_thread_objs: 517, num_central_objs: 103, num_transfer_objs: 5632, free_bytes: 400128, allocated_bytes: 475136 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 35, num_thread_objs: 534, num_central_objs: 0, num_transfer_objs: 1938, free_bytes: 197760, allocated_bytes: 286720 }, { bytes_per_object: 96, pages_pe .......... 
heSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 82 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 8 }, replSetStepDown: { failed: 0, total: 1 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 152 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 57 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 12 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 2 } }, document: { deleted: 0, inserted: 12, returned: 472, updated: 27 }, getLastError: { wtime: { num: 39, totalMillis: 5858 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 135, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 304, scannedObjects: 437 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 14, eventWait: 14, cancels: 483, waits: 1820, scheduledNetCmd: 100, scheduledDBWork: 3, scheduledXclWork: 0, scheduledWorkAt: 573, scheduledWork: 1999, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 30 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:13.349-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:13.349-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:13.349-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.350-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.351-0500 s20014| Succeeded 89 [js_test:multi_coll_drop] 2016-04-06T02:54:13.354-0500 s20014| Canceled..." 
}, apply: { batches: { num: 168, totalMillis: 0 }, ops: 196 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 67253, getmores: { num: 266, totalMillis: 15808 }, ops: 206, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.359-0500 s20014| 2016-04-06T02:53:49.352-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:11.721-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:13.360-0500 s20014| 2016-04-06T02:53:49.352-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:13.362-0500 s20014| 2016-04-06T02:53:49.852-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c08d06c33406d4d9c0d7, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:13.366-0500 s20014| 2016-04-06T02:53:49.853-0500 D ASIO [conn1] startCommand: RemoteCommand 942 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:19.853-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08d06c33406d4d9c0d7'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929229853), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.366-0500 s20014| 2016-04-06T02:53:49.853-0500 I ASIO [conn1] dropping unhealthy pooled connection to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.367-0500 s20014| 2016-04-06T02:53:49.853-0500 I ASIO [conn1] dropping unhealthy pooled connection to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.367-0500 s20014| 2016-04-06T02:53:49.853-0500 I ASIO [conn1] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:54:13.368-0500 s20014| 2016-04-06T02:53:49.853-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.368-0500 s20014| 2016-04-06T02:53:49.853-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 943 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.369-0500 s20014| 2016-04-06T02:53:49.854-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.370-0500 s20014| 2016-04-06T02:53:49.854-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 943 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:13.371-0500 s20014| 2016-04-06T02:53:49.854-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 942 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.371-0500 s20014| 2016-04-06T02:53:49.854-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 942 finished with response: { ok: 0.0, errmsg: "not master", code: 10107 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.372-0500 s20014| 2016-04-06T02:53:49.854-0500 D NETWORK [conn1] Marking host 
mongovm16:20012 as failed [js_test:multi_coll_drop] 2016-04-06T02:54:13.374-0500 s20014| 2016-04-06T02:53:49.854-0500 D SHARDING [conn1] Command failed with retriable error and will be retried :: caused by :: NotMaster: not master [js_test:multi_coll_drop] 2016-04-06T02:54:13.374-0500 s20014| 2016-04-06T02:53:49.854-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.375-0500 s20014| 2016-04-06T02:53:49.854-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20012, event detected [js_test:multi_coll_drop] 2016-04-06T02:54:13.377-0500 s20014| 2016-04-06T02:53:49.854-0500 I NETWORK [conn1] Socket closed remotely, no longer connected (idle 15 secs, remote host 192.168.100.28:20012) [js_test:multi_coll_drop] 2016-04-06T02:54:13.378-0500 s20014| 2016-04-06T02:53:49.854-0500 D NETWORK [conn1] creating new connection to:mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.379-0500 s20014| 2016-04-06T02:53:49.854-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:multi_coll_drop] 2016-04-06T02:54:13.381-0500 s20014| 2016-04-06T02:53:49.855-0500 D NETWORK [conn1] connected to server mongovm16:20012 (192.168.100.28) [js_test:multi_coll_drop] 2016-04-06T02:54:13.384-0500 s20014| 2016-04-06T02:53:49.855-0500 D NETWORK [conn1] connected connection! [js_test:multi_coll_drop] 2016-04-06T02:54:13.385-0500 s20014| 2016-04-06T02:53:49.856-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20011, no events [js_test:multi_coll_drop] 2016-04-06T02:54:13.385-0500 s20014| 2016-04-06T02:53:49.856-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.386-0500 s20014| 2016-04-06T02:53:50.357-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.387-0500 s20014| 2016-04-06T02:53:50.362-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.388-0500 s20014| 2016-04-06T02:53:50.862-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.389-0500 s20014| 2016-04-06T02:53:50.863-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.390-0500 s20014| 2016-04-06T02:53:51.364-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.391-0500 s20014| 2016-04-06T02:53:51.364-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20013, no events [js_test:multi_coll_drop] 2016-04-06T02:54:13.393-0500 s20014| 2016-04-06T02:53:51.365-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.395-0500 s20014| 2016-04-06T02:53:51.865-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.396-0500 s20014| 2016-04-06T02:53:51.866-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.397-0500 s20014| 2016-04-06T02:53:52.366-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.398-0500 s20014| 2016-04-06T02:53:52.367-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 
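
The exchange above is the config-server distributed lock protocol in miniature. Each acquisition attempt is a findAndModify against config.locks that matches only an unlocked document ({ _id: "multidrop.coll", state: 0 }) and upserts it into state 2; when another process already holds the lock, the upsert collides with the existing _id and fails with E11000, which the caller treats as "not acquired". It then reads config.locks and config.lockpings and runs serverStatus on the config server (the truncated 22kB dumps above) to compare the holder's last ping against the server's clock, and will force a takeover only once the holder has been silent longer than the 900000 ms lock timeout (hence "could not force lock 'multidrop.coll' because elapsed time 8109 < takeover time 900000 ms" above). A minimal shell sketch of one attempt, with the command shape copied from requests 934/936/938; the helper itself is illustrative, not the actual DistLockCatalog code:

// Hedged sketch of a single distributed-lock acquisition attempt.
// Command shape copied from request 934 in the log; the helper and its
// arguments are illustrative placeholders.
function tryAcquireDistLock(configDB, lockName, who, processId, why) {
    var res = configDB.runCommand({
        findAndModify: "locks",
        query: { _id: lockName, state: 0 },          // matches only an unlocked doc
        update: { $set: { ts: ObjectId(), state: 2,  // state 2 == locked
                          who: who, process: processId,
                          when: new Date(), why: why } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    if (res.ok) return true;              // upsert succeeded: lock acquired
    if (res.code === 11000) return false; // _id collision: lock already held
    throw res;                            // e.g. NotMaster (10107): retry against new primary
}

The NotMaster arm is exactly what request 942 just hit ({ ok: 0.0, errmsg: "not master", code: 10107 }): the mongos marks mongovm16:20012 as failed and its ReplicaSetMonitor re-polls the set roughly every 500 ms until a new primary appears; request 945 below finally reaches mongovm16:20013, and promptly fails with the same E11000 because mongovm16:20010 still holds the lock for its chunk split.
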
2016-04-06T02:54:13.398-0500 s20014| 2016-04-06T02:53:52.867-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.399-0500 s20014| 2016-04-06T02:53:52.870-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.399-0500 s20014| 2016-04-06T02:53:53.374-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.400-0500 s20014| 2016-04-06T02:53:53.375-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.400-0500 s20014| 2016-04-06T02:53:53.876-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.401-0500 s20014| 2016-04-06T02:53:53.877-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.402-0500 s20014| 2016-04-06T02:53:54.381-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.402-0500 s20014| 2016-04-06T02:53:54.381-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20012, no events [js_test:multi_coll_drop] 2016-04-06T02:54:13.403-0500 s20014| 2016-04-06T02:53:54.382-0500 D NETWORK [conn1] polling for status of connection to 192.168.100.28:20011, no events [js_test:multi_coll_drop] 2016-04-06T02:54:13.404-0500 s20014| 2016-04-06T02:53:54.383-0500 W NETWORK [conn1] No primary detected for set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.406-0500 s20014| 2016-04-06T02:53:54.883-0500 D NETWORK [conn1] Starting new refresh of replica set multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.409-0500 s20014| 2016-04-06T02:53:54.884-0500 D ASIO [conn1] startCommand: RemoteCommand 945 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:24.884-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08d06c33406d4d9c0d7'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929229853), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.412-0500 s20014| 2016-04-06T02:53:54.884-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Connecting to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.413-0500 s20014| 2016-04-06T02:53:54.886-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 946 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.415-0500 s20014| 2016-04-06T02:53:54.886-0500 I ASIO [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.415-0500 s20014| 2016-04-06T02:53:54.886-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 946 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:13.417-0500 s20014| 2016-04-06T02:53:54.887-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 945 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.418-0500 s20014| 2016-04-06T02:53:54.887-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 945 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: 
_id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.420-0500 s20014| 2016-04-06T02:53:54.887-0500 D ASIO [conn1] startCommand: RemoteCommand 948 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:24.887-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929228000|3, t: 7 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.421-0500 s20014| 2016-04-06T02:53:54.887-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 948 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.422-0500 s20014| 2016-04-06T02:53:54.888-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 948 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.423-0500 s20014| 2016-04-06T02:53:54.888-0500 D ASIO [conn1] startCommand: RemoteCommand 950 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:24.888-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929234000|3, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.424-0500 s20014| 2016-04-06T02:53:54.888-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 950 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.426-0500 s20014| 2016-04-06T02:53:56.106-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 950 finished with response: { waitedMS: 1217, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.429-0500 s20014| 2016-04-06T02:53:56.106-0500 D ASIO [conn1] startCommand: RemoteCommand 952 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:26.106-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.430-0500 s20014| 2016-04-06T02:53:56.106-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 952 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.439-0500 s20014| 2016-04-06T02:53:56.119-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 952 finished with response: { host: "mongovm16:20013", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 66033, uptime: 119.0, uptimeMillis: 118767, uptimeEstimate: 83.0, localTime: new Date(1459929236107), asserts: { regular: 0, warning: 0, msg: 0, user: 25, rollovers: 0 }, connections: { current: 13, available: 51187, totalCreated: 71 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 66340816, page_faults: 0 }, globalLock: { totalTime: 118763000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 30, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3612, w: 902, R: 212, W: 393 }, acquireWaitCount: { r: 23, w: 1, W: 10 }, timeAcquiringMicros: { r: 85380, w: 28554, W: 4350 } }, Database: { acquireCount: { r: 1070, w: 224, W: 678 }, acquireWaitCount: { r: 136, W: 5 }, timeAcquiringMicros: { r: 15600, W: 2901 } }, Collection: { acquireCount: { r: 602, w: 207 } }, Metadata: { acquireCount: { w: 68, W: 546 }, acquireWaitCount: { W: 6 }, timeAcquiringMicros: { W: 406 } }, oplog: { acquireCount: { r: 482, w: 24, R: 1, W: 1 } } }, network: { bytesIn: 141863, bytesOut: 834789, numRequests: 667 }, opcounters: { insert: 3, query: 145, update: 7, delete: 0, getmore: 40, command: 492 }, opcountersRepl: { insert: 64, query: 0, update: 184, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20013", me: "mongovm16:20013", electionId: ObjectId('7fffffff0000000000000008'), rbid: 1885590396 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 99904848, heap_size: 137072640 }, tcmalloc: { pageheap_free_bytes: 34250752, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1587352, total_free_bytes: 2917040, central_cache_free_bytes: 280984, transfer_cache_free_bytes: 1048704, thread_cache_free_bytes: 1587352, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 117, num_central_objs: 1003, num_transfer_objs: 0, free_bytes: 8960, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 581, num_central_objs: 501, num_transfer_objs: 0, free_bytes: 17312, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1622, num_central_objs: 204, num_transfer_objs: 1280, free_bytes: 99392, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 22, num_thread_objs: 872, num_central_objs: 69, num_transfer_objs: 0, free_bytes: 45168, allocated_bytes: 180224 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 62, num_thread_objs: 649, num_central_objs: 173, num_transfer_objs: 6016, free_bytes: 437632, allocated_bytes: 507904 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 37, num_thread_objs: 496, num_central_objs: 78, num_transfer_objs: 2142, free_bytes: 217280, allocated_bytes: 303104 }, { bytes_per_object: 96, pages_per_span: 2, num_spa .......... 
acheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 82 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 6 }, replSetStepDown: { failed: 0, total: 0 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 106 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 23 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 8 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 1, total: 3 } }, document: { deleted: 0, inserted: 6, returned: 388, updated: 14 }, getLastError: { wtime: { num: 20, totalMillis: 20180 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 68, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 185, scannedObjects: 384 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 23, eventWait: 23, cancels: 576, waits: 1704, scheduledNetCmd: 103, scheduledDBWork: 4, scheduledXclWork: 6, scheduledWorkAt: 661, scheduledWork: 1822, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 15 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:13.440-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:13.441-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:13.441-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.442-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.443-0500 s20014| Succeeded 93 [js_test:multi_coll_drop] 2016-04-06T02:54:13.446-0500 s20014| Canceled..." 
}, apply: { batches: { num: 209, totalMillis: 0 }, ops: 216 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 73272, getmores: { num: 399, totalMillis: 33888 }, ops: 226, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.450-0500 s20014| 2016-04-06T02:53:56.119-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:48.990-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:13.465-0500 s20014| 2016-04-06T02:53:56.119-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:13.466-0500 s20014| 2016-04-06T02:53:56.119-0500 I SHARDING [conn1] waited 15s for distributed lock multidrop.coll for drop [js_test:multi_coll_drop] 2016-04-06T02:54:13.468-0500 s20014| 2016-04-06T02:53:56.395-0500 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: multidrop-configRS [js_test:multi_coll_drop] 2016-04-06T02:54:13.470-0500 s20014| 2016-04-06T02:53:56.619-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c09406c33406d4d9c0d8, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:13.472-0500 s20014| 2016-04-06T02:53:56.619-0500 D ASIO [conn1] startCommand: RemoteCommand 954 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:26.619-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c09406c33406d4d9c0d8'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929236619), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.472-0500 s20014| 2016-04-06T02:53:56.619-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 954 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.474-0500 s20014| 2016-04-06T02:53:56.620-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 954 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.478-0500 s20014| 2016-04-06T02:53:56.620-0500 D ASIO [conn1] startCommand: RemoteCommand 956 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:26.620-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929234000|3, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.479-0500 s20014| 2016-04-06T02:53:56.620-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 956 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.481-0500 s20014| 2016-04-06T02:53:56.622-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 956 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: 
"mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.483-0500 s20014| 2016-04-06T02:53:56.622-0500 D ASIO [conn1] startCommand: RemoteCommand 958 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:26.622-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929236000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.484-0500 s20014| 2016-04-06T02:53:56.622-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 958 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.486-0500 s20014| 2016-04-06T02:53:56.622-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 958 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.487-0500 s20014| 2016-04-06T02:53:56.622-0500 D ASIO [conn1] startCommand: RemoteCommand 960 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:26.622-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.488-0500 s20014| 2016-04-06T02:53:56.622-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 960 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.496-0500 s20014| 2016-04-06T02:53:56.625-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 960 finished with response: { host: "mongovm16:20013", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 66033, uptime: 119.0, uptimeMillis: 119282, uptimeEstimate: 83.0, localTime: new Date(1459929236622), asserts: { regular: 0, warning: 0, msg: 0, user: 26, rollovers: 0 }, connections: { current: 13, available: 51187, totalCreated: 71 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 99905728, page_faults: 0 }, globalLock: { totalTime: 119279000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 30, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3635, w: 905, R: 212, W: 393 }, acquireWaitCount: { r: 23, w: 1, W: 10 }, timeAcquiringMicros: { r: 85380, w: 28554, W: 4350 } }, Database: { acquireCount: { r: 1079, w: 227, W: 678 }, acquireWaitCount: { r: 136, W: 5 }, timeAcquiringMicros: { r: 15600, W: 2901 } }, Collection: { acquireCount: { r: 605, w: 209 } }, Metadata: { acquireCount: { w: 69, W: 548 }, acquireWaitCount: { W: 7 }, timeAcquiringMicros: { W: 441 } }, oplog: { acquireCount: { r: 488, w: 25, R: 1, W: 1 } } }, network: { bytesIn: 144701, bytesOut: 863752, numRequests: 677 }, opcounters: { insert: 3, query: 148, update: 8, delete: 0, getmore: 43, command: 496 }, opcountersRepl: { insert: 64, query: 0, update: 184, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20013", me: "mongovm16:20013", electionId: ObjectId('7fffffff0000000000000008'), rbid: 1885590396 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 99907248, heap_size: 137072640 }, tcmalloc: { pageheap_free_bytes: 34201600, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1627984, total_free_bytes: 2963792, central_cache_free_bytes: 303488, transfer_cache_free_bytes: 1032320, thread_cache_free_bytes: 1627984, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 126, num_central_objs: 994, num_transfer_objs: 0, free_bytes: 8960, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 579, num_central_objs: 503, num_transfer_objs: 0, free_bytes: 17312, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1656, num_central_objs: 169, num_transfer_objs: 1280, free_bytes: 99360, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 22, num_thread_objs: 880, num_central_objs: 61, num_transfer_objs: 0, free_bytes: 45168, allocated_bytes: 180224 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 62, num_thread_objs: 648, num_central_objs: 173, num_transfer_objs: 6016, free_bytes: 437568, allocated_bytes: 507904 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 37, num_thread_objs: 507, num_central_objs: 67, num_transfer_objs: 2142, free_bytes: 217280, allocated_bytes: 303104 }, { bytes_per_object: 96, pages_per_span: 2, num_span .......... 
acheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 82 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 6 }, replSetStepDown: { failed: 0, total: 0 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 108 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 24 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 8 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 1, total: 3 } }, document: { deleted: 0, inserted: 6, returned: 391, updated: 15 }, getLastError: { wtime: { num: 21, totalMillis: 20189 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 70, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 188, scannedObjects: 387 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 23, eventWait: 23, cancels: 578, waits: 1713, scheduledNetCmd: 105, scheduledDBWork: 4, scheduledXclWork: 6, scheduledWorkAt: 664, scheduledWork: 1833, schedulingFailures: 0 }, queues: { networkInProgress: 1, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 2, ready: 0, free: 15 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:13.497-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:13.497-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:13.498-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.498-0500 s20014| In Progress 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.498-0500 s20014| Succeeded 94 [js_test:multi_coll_drop] 2016-04-06T02:54:13.501-0500 s20014| Canceled..." 
}, apply: { batches: { num: 209, totalMillis: 0 }, ops: 216 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 73272, getmores: { num: 399, totalMillis: 33888 }, ops: 226, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.502-0500 s20014| 2016-04-06T02:53:56.626-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:48.990-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:13.502-0500 s20014| 2016-04-06T02:53:56.626-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 520 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.503-0500 s20014| 2016-04-06T02:53:56.626-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:13.505-0500 s20014| 2016-04-06T02:53:56.977-0500 D ASIO [Balancer] startCommand: RemoteCommand 962 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:26.976-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929236976), up: 109, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.507-0500 s20014| 2016-04-06T02:53:56.977-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 962 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.508-0500 s20014| 2016-04-06T02:53:56.998-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 962 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929236000|2, t: 8 }, electionId: ObjectId('7fffffff0000000000000008') } [js_test:multi_coll_drop] 2016-04-06T02:54:13.510-0500 s20014| 2016-04-06T02:53:57.010-0500 D ASIO [Balancer] startCommand: RemoteCommand 964 -- target:mongovm16:20011 db:config expDate:2016-04-06T02:54:27.010-0500 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929236000|2, t: 8 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.510-0500 s20014| 2016-04-06T02:53:57.013-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 964 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:13.511-0500 s20014| 2016-04-06T02:53:57.016-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 964 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "shard0000", host: "mongovm16:20010" } ], id: 0, ns: "config.shards" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.511-0500 s20014| 2016-04-06T02:53:57.016-0500 D SHARDING [Balancer] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp 1459929236000|2, t: 8 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.512-0500 s20014| 2016-04-06T02:53:57.016-0500 D ASIO [Balancer] startCommand: RemoteCommand 966 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:27.016-0500 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929236000|2, t: 8 } }, limit: 1, maxTimeMS: 30000 } 
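
Note the readConcern every config read carries in this round: { level: "majority", afterOpTime: { ts: ..., t: 8 } }. The mongos threads the opTime returned by its last config write (the config.mongos ping update, request 962) into each subsequent read, so even while the primary hops from 20012 to 20013 the reads are guaranteed to observe at least that write; the term t: 8 matches electionId ObjectId('7fffffff0000000000000008') in the serverStatus dumps above. A sketch of the same causally chained write-then-read pair (collection names and command shapes are from the log; treating the reply's opTime field as visible to a shell caller is an assumption):

// Hedged sketch: chain a majority read after a majority write using the
// write's returned opTime, mirroring requests 962 -> 964 above.
var config = db.getSiblingDB("config");
var w = config.runCommand({
    update: "mongos",
    updates: [ { q: { _id: "mongovm16:20014" },
                 u: { $set: { ping: new Date(), waiting: false } },
                 multi: false, upsert: true } ],
    writeConcern: { w: "majority", wtimeout: 15000 }
});
// The logged reply to request 962 carries opTime: { ts: ..., t: 8 };
// reuse it so the read cannot see a stale, pre-write snapshot.
var r = config.runCommand({
    find: "shards",
    readConcern: { level: "majority", afterOpTime: w.opTime },
    maxTimeMS: 30000
});

On the lines that follow, the round short-circuits: config.settings yields { _id: "chunksize", value: 50 } (MaxChunkSize: 50MB) and { _id: "balancer", stopped: true }, so the Balancer skips the round entirely ("skipping balancing round because balancing is disabled") and goes back to updating its ping document.
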
[js_test:multi_coll_drop] 2016-04-06T02:54:13.513-0500 s20014| 2016-04-06T02:53:57.016-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 966 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.513-0500 s20014| 2016-04-06T02:53:57.027-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 966 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "chunksize", value: 50 } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.513-0500 s20014| 2016-04-06T02:53:57.028-0500 D SHARDING [Balancer] Refreshing MaxChunkSize: 50MB [js_test:multi_coll_drop] 2016-04-06T02:54:13.515-0500 s20014| 2016-04-06T02:53:57.028-0500 D ASIO [Balancer] startCommand: RemoteCommand 968 -- target:mongovm16:20012 db:config expDate:2016-04-06T02:54:27.028-0500 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929236000|2, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.515-0500 s20014| 2016-04-06T02:53:57.028-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 968 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:13.517-0500 s20014| 2016-04-06T02:53:57.032-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 968 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "balancer", stopped: true } ], id: 0, ns: "config.settings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.518-0500 s20014| 2016-04-06T02:53:57.032-0500 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:multi_coll_drop] 2016-04-06T02:54:13.521-0500 s20014| 2016-04-06T02:53:57.032-0500 D ASIO [Balancer] startCommand: RemoteCommand 970 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:27.032-0500 cmd:{ update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929237032), up: 110, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.523-0500 s20014| 2016-04-06T02:53:57.032-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 970 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.525-0500 s20014| 2016-04-06T02:53:57.057-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 970 finished with response: { ok: 1, nModified: 1, n: 1, opTime: { ts: Timestamp 1459929237000|1, t: 8 }, electionId: ObjectId('7fffffff0000000000000008') } [js_test:multi_coll_drop] 2016-04-06T02:54:13.528-0500 s20014| 2016-04-06T02:53:57.126-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c09506c33406d4d9c0d9, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:13.530-0500 s20014| 2016-04-06T02:53:57.126-0500 D ASIO [conn1] startCommand: RemoteCommand 972 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:27.126-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c09506c33406d4d9c0d9'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929237126), why: "drop" } }, upsert: true, new: true, 
writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.531-0500 s20014| 2016-04-06T02:53:57.126-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 972 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.532-0500 s20014| 2016-04-06T02:53:57.127-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 972 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.535-0500 s20014| 2016-04-06T02:53:57.127-0500 D ASIO [conn1] startCommand: RemoteCommand 974 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:27.127-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.536-0500 s20014| 2016-04-06T02:53:57.127-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 974 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.540-0500 s20014| 2016-04-06T02:53:57.128-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 974 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.543-0500 s20014| 2016-04-06T02:53:57.128-0500 D ASIO [conn1] startCommand: RemoteCommand 976 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:27.128-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.543-0500 s20014| 2016-04-06T02:53:57.128-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 976 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.546-0500 s20014| 2016-04-06T02:53:57.129-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 976 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.547-0500 s20014| 2016-04-06T02:53:57.129-0500 D ASIO [conn1] startCommand: RemoteCommand 978 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:27.129-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.549-0500 s20014| 2016-04-06T02:53:57.130-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 978 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.561-0500 s20014| 2016-04-06T02:53:57.131-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 978 finished with response: { host: "mongovm16:20013", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 66033, uptime: 120.0, uptimeMillis: 119790, uptimeEstimate: 84.0, localTime: new Date(1459929237130), asserts: { regular: 0, warning: 0, msg: 0, user: 27, rollovers: 0 }, connections: { current: 13, available: 51187, totalCreated: 71 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133466768, page_faults: 0 }, globalLock: { totalTime: 119786000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 30, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3704, w: 910, R: 212, W: 393 }, acquireWaitCount: { r: 23, w: 1, W: 10 }, timeAcquiringMicros: { r: 85380, w: 28554, W: 4350 } }, Database: { acquireCount: { r: 1109, w: 232, W: 678 }, acquireWaitCount: { r: 136, W: 5 }, timeAcquiringMicros: { r: 15600, W: 2901 } }, Collection: { acquireCount: { r: 608, w: 212 } }, Metadata: { acquireCount: { w: 71, W: 552 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 616 } }, oplog: { acquireCount: { r: 515, w: 27, R: 1, W: 1 } } }, network: { bytesIn: 154467, bytesOut: 898506, numRequests: 707 }, opcounters: { insert: 3, query: 151, update: 10, delete: 0, getmore: 53, command: 512 }, opcountersRepl: { insert: 64, query: 0, update: 184, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20013", me: "mongovm16:20013", electionId: ObjectId('7fffffff0000000000000008'), rbid: 1885590396 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133468288, heap_size: 137072640 }, tcmalloc: { pageheap_free_bytes: 647168, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1627128, total_free_bytes: 2957184, central_cache_free_bytes: 281480, transfer_cache_free_bytes: 1048576, thread_cache_free_bytes: 1627128, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 129, num_central_objs: 991, num_transfer_objs: 0, free_bytes: 8960, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 583, num_central_objs: 495, num_transfer_objs: 0, free_bytes: 17248, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1495, num_central_objs: 72, num_transfer_objs: 1536, free_bytes: 99296, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 22, num_thread_objs: 870, num_central_objs: 69, num_transfer_objs: 0, free_bytes: 45072, allocated_bytes: 180224 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 62, num_thread_objs: 649, num_central_objs: 173, num_transfer_objs: 6016, free_bytes: 437632, allocated_bytes: 507904 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 37, num_thread_objs: 544, num_central_objs: 31, num_transfer_objs: 2142, free_bytes: 217360, allocated_bytes: 303104 }, { bytes_per_object: 96, pages_per_span: 2, num_span .......... 
cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 83 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 6 }, replSetStepDown: { failed: 0, total: 0 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 121 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 25 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 3 } }, document: { deleted: 0, inserted: 6, returned: 399, updated: 17 }, getLastError: { wtime: { num: 23, totalMillis: 20204 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 73, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 193, scannedObjects: 392 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 23, eventWait: 23, cancels: 591, waits: 1743, scheduledNetCmd: 105, scheduledDBWork: 4, scheduledXclWork: 6, scheduledWorkAt: 678, scheduledWork: 1876, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 15 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:13.562-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:13.563-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:13.563-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.563-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.565-0500 s20014| Succeeded 95 [js_test:multi_coll_drop] 2016-04-06T02:54:13.568-0500 s20014| Canceled..." 
}, apply: { batches: { num: 209, totalMillis: 0 }, ops: 216 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 73272, getmores: { num: 399, totalMillis: 33888 }, ops: 226, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 1 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.568-0500 s20014| 2016-04-06T02:53:57.131-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:48.990-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:13.569-0500 s20014| 2016-04-06T02:53:57.131-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 1028 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.570-0500 s20014| 2016-04-06T02:53:57.131-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:13.573-0500 s20014| 2016-04-06T02:53:57.632-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c09506c33406d4d9c0da, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:13.581-0500 s20014| 2016-04-06T02:53:57.632-0500 D ASIO [conn1] startCommand: RemoteCommand 980 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:27.632-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c09506c33406d4d9c0da'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929237632), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.582-0500 s20014| 2016-04-06T02:53:57.633-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 980 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.584-0500 s20014| 2016-04-06T02:53:57.634-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 980 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.585-0500 s20014| 2016-04-06T02:53:57.634-0500 D ASIO [conn1] startCommand: RemoteCommand 982 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:27.634-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.586-0500 s20014| 2016-04-06T02:53:57.634-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 982 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.589-0500 s20014| 2016-04-06T02:53:57.634-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 982 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: 
MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.592-0500 s20014| 2016-04-06T02:53:57.635-0500 D ASIO [conn1] startCommand: RemoteCommand 984 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:27.635-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.593-0500 s20014| 2016-04-06T02:53:57.635-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 984 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.596-0500 s20014| 2016-04-06T02:53:57.635-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 984 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.600-0500 s20014| 2016-04-06T02:53:57.635-0500 D ASIO [conn1] startCommand: RemoteCommand 986 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:27.635-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.619-0500 s20014| 2016-04-06T02:53:57.635-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 986 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:13.639-0500 s20014| 2016-04-06T02:53:57.636-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 986 finished with response: { host: "mongovm16:20013", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 66033, uptime: 120.0, uptimeMillis: 120296, uptimeEstimate: 84.0, localTime: new Date(1459929237636), asserts: { regular: 0, warning: 0, msg: 0, user: 28, rollovers: 0 }, connections: { current: 13, available: 51187, totalCreated: 71 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 133471744, page_faults: 0 }, globalLock: { totalTime: 120292000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 30, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3747, w: 911, R: 212, W: 393 }, acquireWaitCount: { r: 23, w: 1, W: 10 }, timeAcquiringMicros: { r: 85380, w: 28554, W: 4350 } }, Database: { acquireCount: { r: 1130, w: 233, W: 678 }, acquireWaitCount: { r: 136, W: 5 }, timeAcquiringMicros: { r: 15600, W: 2901 } }, Collection: { acquireCount: { r: 642, w: 213 } }, Metadata: { acquireCount: { w: 71, W: 552 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 616 } }, oplog: { acquireCount: { r: 517, w: 27, R: 1, W: 1 } } }, network: { bytesIn: 155428, bytesOut: 925521, numRequests: 711 }, opcounters: { insert: 3, query: 153, update: 10, delete: 0, getmore: 53, command: 514 }, opcountersRepl: { insert: 64, query: 0, update: 184, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20013", me: "mongovm16:20013", electionId: ObjectId('7fffffff0000000000000008'), rbid: 1885590396 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 133473264, heap_size: 137072640 }, tcmalloc: { 
pageheap_free_bytes: 647168, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1638456, total_free_bytes: 2952208, central_cache_free_bytes: 273240, transfer_cache_free_bytes: 1040512, thread_cache_free_bytes: 1638456, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 129, num_central_objs: 993, num_transfer_objs: 0, free_bytes: 8976, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 581, num_central_objs: 476, num_transfer_objs: 0, free_bytes: 16912, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 36, num_thread_objs: 1458, num_central_objs: 61, num_transfer_objs: 1536, free_bytes: 97760, allocated_bytes: 294912 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 22, num_thread_objs: 901, num_central_objs: 36, num_transfer_objs: 0, free_bytes: 44976, allocated_bytes: 180224 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 62, num_thread_objs: 654, num_central_objs: 155, num_transfer_objs: 6016, free_bytes: 436800, allocated_bytes: 507904 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 37, num_thread_objs: 543, num_central_objs: 21, num_transfer_objs: 2142, free_bytes: 216480, allocated_bytes: 303104 }, { bytes_per_object: 96, pages_per_span: 2, num_span .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 83 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 6 }, replSetStepDown: { failed: 0, total: 0 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 121 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 26 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, 
total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 3 } }, document: { deleted: 0, inserted: 6, returned: 401, updated: 17 }, getLastError: { wtime: { num: 23, totalMillis: 20204 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 75, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 195, scannedObjects: 394 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 23, eventWait: 23, cancels: 591, waits: 1747, scheduledNetCmd: 105, scheduledDBWork: 4, scheduledXclWork: 6, scheduledWorkAt: 678, scheduledWork: 1880, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 15 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:13.640-0500 s20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:13.640-0500 s20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:13.641-0500 s20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.642-0500 s20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:13.642-0500 s20014| Succeeded 95 [js_test:multi_coll_drop] 2016-04-06T02:54:13.645-0500 s20014| Canceled..." }, apply: { batches: { num: 209, totalMillis: 0 }, ops: 216 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 73272, getmores: { num: 399, totalMillis: 33888 }, ops: 226, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 2 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.646-0500 s20014| 2016-04-06T02:53:57.647-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:48.990-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:13.647-0500 s20014| 2016-04-06T02:53:57.647-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 1530 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.647-0500 s20014| 2016-04-06T02:53:57.647-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
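The s20014 lines above show one complete round of the sharding distributed-lock protocol: a findAndModify upsert against config.locks that treats state: 0 as "unlocked", an E11000 duplicate key error on _id_ when another process already holds the lock document, a read of the holder's config.lockpings entry, and a takeover check that refuses to force the lock while the holder's last ping is younger than the 900000 ms lock timeout. A minimal mongo-shell sketch of that round trip follows; the helper name tryAcquireDistLock, the e.code check, and the return conventions are illustrative assumptions, not the server's actual distributed-lock catalog code:

    // Sketch only: mirrors the findAndModify / lockpings round trip logged above.
    var LOCK_TAKEOVER_MS = 900000; // the "lock timeout : 900000 ms" printed by s20014

    function tryAcquireDistLock(configDB, name, processId, why) {
        try {
            // state: 0 means unlocked; the upsert either inserts a fresh lock
            // document or flips an unlocked one to state: 2 (locked).
            return configDB.locks.findAndModify({
                query: { _id: name, state: 0 },
                update: { $set: { ts: new ObjectId(), state: 2,
                                  who: processId + ":conn1", process: processId,
                                  when: new Date(), why: why } },
                upsert: true, new: true,
                writeConcern: { w: "majority", wtimeout: 15000 }
            });
        } catch (e) {
            // The shell helper throws on failure; E11000 on _id_ is the
            // "lock busy" case seen in the c20011 log lines. (The thrown
            // error's shape varies by shell version; e.code is a simplification.)
            if (e.code && e.code !== 11000) throw e;
            var lock = configDB.locks.findOne({ _id: name });
            var ping = configDB.lockpings.findOne({ _id: lock.process });
            var elapsedMs = new Date() - ping.ping; // ms since holder's last ping
            if (elapsedMs < LOCK_TAKEOVER_MS) {
                // "could not force lock ... elapsed time ... < takeover time 900000 ms"
                return null; // not acquired; caller retries
            }
            return null; // holder looks dead; a real implementation may force the lock here
        }
    }

Run against the config database (var configDB = db.getSiblingDB("config")), each null return corresponds to one "distributed lock 'multidrop.coll' was not acquired" line above, after which mongos waits roughly 500 ms (02:53:57.131 -> 02:53:57.632 -> 02:53:58.159) before retrying.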
[js_test:multi_coll_drop] 2016-04-06T02:54:13.649-0500 s20014| 2016-04-06T02:53:58.159-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c09606c33406d4d9c0db, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:13.654-0500 c20012| 2016-04-06T02:53:49.040-0500 I COMMAND [conn38] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20010:1459929128:185613966" }, update: { $set: { ping: new Date(1459929228990) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1459929228990) } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:427 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 49ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.656-0500 c20012| 2016-04-06T02:53:49.041-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|2, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.660-0500 c20012| 2016-04-06T02:53:49.041-0500 I COMMAND [conn40] command local.oplog.rs command: getMore { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|2, t: 7 } } cursorid:23538204668 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.664-0500 c20012| 2016-04-06T02:53:49.041-0500 D COMMAND [conn40] run command local.$cmd { getMore: 23538204668, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.665-0500 c20011| 2016-04-06T02:53:18.986-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929194000|2, t: 5 } } cursorid:19461455963 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.673-0500 c20011| 2016-04-06T02:53:18.986-0500 I COMMAND [conn55] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198271), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.674-0500 c20011| 2016-04-06T02:53:18.986-0500 D REPL [conn62] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29989870μs [js_test:multi_coll_drop] 2016-04-06T02:54:13.675-0500 c20011| 
2016-04-06T02:53:18.986-0500 D COMMAND [conn55] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.678-0500 c20011| 2016-04-06T02:53:18.986-0500 D COMMAND [conn55] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.678-0500 c20011| 2016-04-06T02:53:18.986-0500 D COMMAND [conn55] Using 'committed' snapshot. { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.679-0500 c20011| 2016-04-06T02:53:18.986-0500 D QUERY [conn55] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:13.680-0500 c20011| 2016-04-06T02:53:18.986-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.684-0500 c20011| 2016-04-06T02:53:18.987-0500 I COMMAND [conn55] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, maxTimeMS: 30000 } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:443 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.686-0500 c20011| 2016-04-06T02:53:18.987-0500 D COMMAND [conn55] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.689-0500 c20011| 2016-04-06T02:53:18.987-0500 D COMMAND [conn55] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.690-0500 c20011| 2016-04-06T02:53:18.987-0500 D COMMAND [conn55] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.691-0500 c20011| 2016-04-06T02:53:18.987-0500 D QUERY [conn55] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.693-0500 c20011| 2016-04-06T02:53:18.987-0500 I COMMAND [conn55] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:434 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.693-0500 c20011| 2016-04-06T02:53:18.987-0500 D COMMAND [conn55] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.697-0500 c20011| 2016-04-06T02:53:18.987-0500 D COMMAND [conn55] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.698-0500 c20011| 2016-04-06T02:53:18.987-0500 D COMMAND [conn55] Using 'committed' snapshot. { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.700-0500 c20011| 2016-04-06T02:53:18.987-0500 D QUERY [conn55] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.702-0500 c20011| 2016-04-06T02:53:18.987-0500 I COMMAND [conn55] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:428 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.706-0500 c20011| 2016-04-06T02:53:18.987-0500 D COMMAND [conn55] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198987), up: 71, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.707-0500 c20011| 2016-04-06T02:53:18.988-0500 D QUERY [conn55] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.707-0500 c20011| 2016-04-06T02:53:18.988-0500 D REPL [conn55] Required snapshot optime: { ts: Timestamp 1459929198000|2, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|1, t: 5 }, name-id: "270" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.708-0500 c20011| 2016-04-06T02:53:18.988-0500 D REPL [conn55] 
Required snapshot optime: { ts: Timestamp 1459929198000|3, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|1, t: 5 }, name-id: "270" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.710-0500 c20011| 2016-04-06T02:53:18.988-0500 I WRITE [conn55] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198987), up: 71, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.712-0500 c20011| 2016-04-06T02:53:18.988-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|1, t: 5 } } cursorid:19461455963 numYields:0 nreturned:1 reslen:510 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.713-0500 c20011| 2016-04-06T02:53:18.989-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:13.714-0500 c20011| 2016-04-06T02:53:18.989-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:13.715-0500 c20011| 2016-04-06T02:53:18.989-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|3, t: 5 } and is durable through: { ts: Timestamp 1459929198000|3, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.715-0500 c20011| 2016-04-06T02:53:18.989-0500 D REPL [conn59] Updating _lastCommittedOpTime to { ts: Timestamp 1459929198000|3, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.717-0500 c20011| 2016-04-06T02:53:18.989-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.719-0500 c20011| 2016-04-06T02:53:18.989-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.721-0500 c20011| 
2016-04-06T02:53:18.990-0500 I COMMAND [conn56] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "mongovm16:20010:1459929128:185613966" }, update: { $set: { ping: new Date(1459929191721) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1459929191721) } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 reslen:427 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 16ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.723-0500 c20011| 2016-04-06T02:53:18.990-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|2, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.725-0500 c20011| 2016-04-06T02:53:18.990-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.726-0500 c20011| 2016-04-06T02:53:18.990-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.730-0500 c20011| 2016-04-06T02:53:18.990-0500 I COMMAND [conn54] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929198273), up: 71, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 12ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.731-0500 c20011| 2016-04-06T02:53:18.990-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|2, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 13ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.732-0500 c20011| 2016-04-06T02:53:18.990-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.734-0500 c20011| 2016-04-06T02:53:18.990-0500 D REPL [conn55] Required snapshot optime: { ts: Timestamp 1459929198000|4, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|3, t: 5 }, name-id: "272" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.735-0500 c20011| 2016-04-06T02:53:18.991-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 
2016-04-06T02:54:13.744-0500 c20011| 2016-04-06T02:53:18.993-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:13.745-0500 c20011| 2016-04-06T02:53:18.993-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:13.747-0500 c20011| 2016-04-06T02:53:18.993-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|4, t: 5 } and is durable through: { ts: Timestamp 1459929198000|3, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.753-0500 c20011| 2016-04-06T02:53:18.993-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929198000|4, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|3, t: 5 }, name-id: "272" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.757-0500 c20011| 2016-04-06T02:53:18.993-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.763-0500 c20011| 2016-04-06T02:53:18.993-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|3, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.764-0500 c20011| 2016-04-06T02:53:18.993-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.769-0500 c20011| 2016-04-06T02:53:18.994-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|1, t: 5 } } cursorid:19461455963 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.774-0500 c20011| 2016-04-06T02:53:18.995-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 
1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:13.776-0500 c20011| 2016-04-06T02:53:18.995-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:13.778-0500 c20011| 2016-04-06T02:53:18.995-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|4, t: 5 } and is durable through: { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.779-0500 c20011| 2016-04-06T02:53:18.995-0500 D REPL [conn59] Updating _lastCommittedOpTime to { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.782-0500 c20011| 2016-04-06T02:53:18.995-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.785-0500 c20011| 2016-04-06T02:53:18.995-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.790-0500 c20011| 2016-04-06T02:53:18.995-0500 I COMMAND [conn55] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929198987), up: 71, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.792-0500 c20011| 2016-04-06T02:53:18.995-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|3, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.804-0500 c20011| 2016-04-06T02:53:18.996-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|3, t: 5 } } cursorid:19461455963 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.807-0500 c20011| 2016-04-06T02:53:18.996-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06e65c17830b843f1ce'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: 
"mongovm16:20010:1459929128:185613966", when: new Date(1459929198996), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.810-0500 c20011| 2016-04-06T02:53:18.996-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.812-0500 c20011| 2016-04-06T02:53:18.996-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.813-0500 c20011| 2016-04-06T02:53:18.996-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.815-0500 c20011| 2016-04-06T02:53:18.996-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.815-0500 c20011| 2016-04-06T02:53:18.996-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:13.816-0500 c20011| 2016-04-06T02:53:18.996-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:13.819-0500 c20011| 2016-04-06T02:53:18.996-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.822-0500 c20011| 2016-04-06T02:53:18.996-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06e65c17830b843f1ce'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929198996), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.828-0500 c20011| 2016-04-06T02:53:18.996-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06e65c17830b843f1ce'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929198996), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c06e65c17830b843f1ce'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929198996), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate 
key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.830-0500 c20011| 2016-04-06T02:53:18.997-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.831-0500 c20011| 2016-04-06T02:53:18.997-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.834-0500 c20011| 2016-04-06T02:53:18.997-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.879-0500 c20011| 2016-04-06T02:53:18.997-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.881-0500 c20011| 2016-04-06T02:53:18.997-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|3, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.896-0500 c20011| 2016-04-06T02:53:18.997-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.897-0500 c20011| 2016-04-06T02:53:18.997-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.898-0500 c20011| 2016-04-06T02:53:18.997-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.899-0500 c20011| 2016-04-06T02:53:18.997-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.914-0500 c20011| 2016-04-06T02:53:18.997-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.914-0500 c20011| 2016-04-06T02:53:18.997-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.918-0500 c20011| 2016-04-06T02:53:18.998-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.922-0500 c20011| 2016-04-06T02:53:19.019-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06f65c17830b843f1cf'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929199016), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.924-0500 c20011| 2016-04-06T02:53:19.019-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.925-0500 c20011| 2016-04-06T02:53:19.019-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.926-0500 c20011| 2016-04-06T02:53:19.019-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.930-0500 c20011| 2016-04-06T02:53:19.019-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.931-0500 c20011| 2016-04-06T02:53:19.019-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:13.932-0500 c20011| 2016-04-06T02:53:19.019-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:13.937-0500 2016-04-06T02:54:00.729-0500c20011| 2016-04-06T02:53:19.019-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06f65c17830b843f1cf'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929199016), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.941-0500 c20011| 2016-04-06T02:53:19.020-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06f65c17830b843f1cf'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929199016), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c06f65c17830b843f1cf'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929199016), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.955-0500 c20011| 2016-04-06T02:53:19.027-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.965-0500 c20011| 2016-04-06T02:53:19.027-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.970-0500 c20011| 2016-04-06T02:53:19.027-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.974-0500 c20011| 2016-04-06T02:53:19.027-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.976-0500 c20011| 2016-04-06T02:53:19.027-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.977-0500 c20011| 2016-04-06T02:53:19.028-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.978-0500 c20011| 2016-04-06T02:53:19.028-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:13.982-0500 c20011| 2016-04-06T02:53:19.028-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.983-0500 c20011| 2016-04-06T02:53:19.028-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:13.986-0500 c20011| 2016-04-06T02:53:19.030-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.988-0500 c20011| 2016-04-06T02:53:19.031-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.989-0500 c20011| 2016-04-06T02:53:19.034-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:54:13.990-0500 c20011| 2016-04-06T02:53:19.036-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06f65c17830b843f1d0'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929199036), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.991-0500 c20011| 2016-04-06T02:53:19.037-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.992-0500 c20011| 2016-04-06T02:53:19.037-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.993-0500 c20011| 2016-04-06T02:53:19.037-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:13.997-0500 c20011| 2016-04-06T02:53:19.037-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:13.998-0500 c20011| 2016-04-06T02:53:19.037-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:13.999-0500 c20011| 2016-04-06T02:53:19.037-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.018-0500 c20011| 2016-04-06T02:53:19.037-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06f65c17830b843f1d0'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929199036), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.023-0500 c20011| 2016-04-06T02:53:19.037-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c06f65c17830b843f1d0'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929199036), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c06f65c17830b843f1d0'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929199036), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.025-0500 c20011| 2016-04-06T02:53:19.037-0500 D COMMAND [conn62] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.031-0500 c20011| 2016-04-06T02:53:19.038-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.042-0500 c20011| 2016-04-06T02:53:19.038-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.044-0500 c20011| 2016-04-06T02:53:19.038-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.048-0500 c20011| 2016-04-06T02:53:19.038-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.055-0500 c20011| 2016-04-06T02:53:19.039-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.057-0500 c20011| 2016-04-06T02:53:19.039-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.062-0500 c20011| 2016-04-06T02:53:19.039-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.062-0500 c20011| 2016-04-06T02:53:19.039-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.066-0500 c20011| 2016-04-06T02:53:19.039-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.067-0500 c20011| 2016-04-06T02:53:19.039-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.069-0500 c20011| 2016-04-06T02:53:19.040-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.073-0500 c20011| 2016-04-06T02:53:20.210-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 497 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:30.210-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.074-0500 c20011| 2016-04-06T02:53:20.210-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 497 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:14.078-0500 c20011| 2016-04-06T02:53:20.211-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 497 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20011", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, opTime: { ts: Timestamp 1459929198000|4, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.079-0500 c20011| 2016-04-06T02:53:20.211-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:22.211Z [js_test:multi_coll_drop] 2016-04-06T02:54:14.079-0500 c20011| 2016-04-06T02:53:20.733-0500 D COMMAND [conn52] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.081-0500 c20011| 2016-04-06T02:53:20.733-0500 I COMMAND [conn52] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.086-0500 c20011| 2016-04-06T02:53:20.811-0500 D COMMAND [conn53] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.087-0500 c20011| 2016-04-06T02:53:20.811-0500 D COMMAND [conn53] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:14.088-0500 c20011| 2016-04-06T02:53:20.811-0500 I COMMAND [conn53] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 5 
} numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.090-0500 c20011| 2016-04-06T02:53:20.968-0500 D COMMAND [conn51] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.091-0500 c20011| 2016-04-06T02:53:20.969-0500 D COMMAND [conn51] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:14.093-0500 c20011| 2016-04-06T02:53:20.969-0500 I COMMAND [conn51] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.094-0500 c20011| 2016-04-06T02:53:20.976-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 499 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:53:30.976-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.098-0500 c20011| 2016-04-06T02:53:20.976-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 499 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:14.101-0500 c20011| 2016-04-06T02:53:20.977-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 499 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 5, primaryId: 0, durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, opTime: { ts: Timestamp 1459929198000|2, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.103-0500 c20011| 2016-04-06T02:53:20.977-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20013 at 2016-04-06T07:53:22.977Z [js_test:multi_coll_drop] 2016-04-06T02:54:14.107-0500 c20011| 2016-04-06T02:53:21.495-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:14.108-0500 c20011| 2016-04-06T02:53:21.495-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:14.110-0500 c20011| 2016-04-06T02:53:21.495-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|4, t: 5 } and is durable through: { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.113-0500 c20011| 2016-04-06T02:53:21.495-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.118-0500 c20011| 2016-04-06T02:53:21.495-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.120-0500 c20011| 2016-04-06T02:53:21.497-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } cursorid:19461455963 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 2500ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.122-0500 c20011| 2016-04-06T02:53:21.498-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.123-0500 c20011| 2016-04-06T02:53:21.968-0500 D COMMAND [conn51] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.128-0500 c20011| 2016-04-06T02:53:21.968-0500 D QUERY [conn51] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:14.133-0500 c20011| 2016-04-06T02:53:21.968-0500 I COMMAND [conn51] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.134-0500 c20011| 2016-04-06T02:53:21.969-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34621 #64 (14 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:14.145-0500 c20011| 2016-04-06T02:53:21.970-0500 D COMMAND [conn64] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.147-0500 c20011| 2016-04-06T02:53:21.972-0500 I COMMAND [conn64] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.151-0500 c20011| 2016-04-06T02:53:21.972-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34622 #65 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:14.156-0500 c20011| 2016-04-06T02:53:21.972-0500 D COMMAND [conn65] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.162-0500 c20011| 2016-04-06T02:53:21.972-0500 D COMMAND [conn64] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929198000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.164-0500 c20011| 2016-04-06T02:53:21.972-0500 I COMMAND [conn65] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 
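
The findAndModify/E11000 cycle recorded above is the distributed-lock protocol behaving as designed under contention: the acquire attempt matches only an unlocked lock document ({ _id: "multidrop.coll", state: 0 }) and relies on upsert: true, so while another process holds the lock (state != 0) the predicate matches nothing and the upsert collides with the existing _id, yielding the duplicate key error the lock manager treats as "busy, retry". A minimal shell sketch of one acquisition attempt, mirroring the command shape in the log (the ObjectId and owner strings below are placeholders, not values from this run):

    // One distributed-lock acquisition attempt against config.locks.
    // The query matches only an unlocked document; upsert creates it if absent.
    var res = db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: {
            ts: ObjectId(),               // placeholder lock attempt id
            state: 2,                     // 2 = held exclusively
            who: "host:port:epoch:conn",  // placeholder owner metadata
            process: "host:port:epoch",
            when: new Date(),
            why: "splitting chunk in multidrop.coll"
        } },
        upsert: true,
        new: true,
        writeConcern: { w: "majority", wtimeout: 15000 },
        maxTimeMS: 30000
    });
    // While another owner holds the lock, res is { ok: 0, code: 11000, ... }
    // (duplicate key on _id_); the caller then re-reads config.locks and
    // config.lockpings, as the surrounding log shows, before retrying.
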
2016-04-06T02:54:14.169-0500 c20011| 2016-04-06T02:53:21.973-0500 D COMMAND [conn65] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:14.169-0500 c20011| 2016-04-06T02:53:21.973-0500 D COMMAND [conn65] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:14.172-0500 c20011| 2016-04-06T02:53:21.973-0500 D REPL [conn65] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|4, t: 5 } and is durable through: { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.176-0500 c20011| 2016-04-06T02:53:21.973-0500 D REPL [conn65] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.181-0500 c20011| 2016-04-06T02:53:21.973-0500 I COMMAND [conn65] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.186-0500 c20011| 2016-04-06T02:53:21.973-0500 I COMMAND [conn64] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929198000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 5 } planSummary: COLLSCAN cursorid:20009485564 keysExamined:0 docsExamined:3 numYields:0 nreturned:3 reslen:871 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.188-0500 c20011| 2016-04-06T02:53:21.974-0500 D COMMAND [conn64] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 20009485564 ] } [js_test:multi_coll_drop] 2016-04-06T02:54:14.190-0500 c20011| 2016-04-06T02:53:21.974-0500 I COMMAND [conn64] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 20009485564 ] } numYields:0 reslen:175 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.192-0500 c20011| 2016-04-06T02:53:21.975-0500 D COMMAND [conn54] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.193-0500 c20011| 2016-04-06T02:53:21.975-0500 D COMMAND [conn54] Waiting for
'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.195-0500 c20011| 2016-04-06T02:53:21.975-0500 D COMMAND [conn54] Using 'committed' snapshot. { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.196-0500 c20011| 2016-04-06T02:53:21.975-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34623 #66 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:14.196-0500 c20011| 2016-04-06T02:53:21.975-0500 D QUERY [conn54] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.196-0500 c20011| 2016-04-06T02:53:21.975-0500 D COMMAND [conn66] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20013" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.200-0500 c20011| 2016-04-06T02:53:21.975-0500 I COMMAND [conn54] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:434 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.202-0500 c20011| 2016-04-06T02:53:21.975-0500 I COMMAND [conn66] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20013" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.203-0500 E QUERY [thread1] Error: drop failed: { "code" : 46, "ok" : 0, "errmsg" : "timed out waiting for multidrop.coll" } : [js_test:multi_coll_drop] 2016-04-06T02:54:14.204-0500 _getErrorWithCode@src/mongo/shell/utils.js:25:13 [js_test:multi_coll_drop] 2016-04-06T02:54:14.205-0500 DBCollection.prototype.drop@src/mongo/shell/collection.js:740:1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.214-0500 @jstests/sharding/multi_coll_drop.js:28:5 [js_test:multi_coll_drop] 2016-04-06T02:54:14.215-0500 @jstests/sharding/multi_coll_drop.js:2:2 [js_test:multi_coll_drop] 2016-04-06T02:54:14.215-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:14.216-0500 c20011| 2016-04-06T02:53:21.975-0500 D COMMAND [conn66] run command admin.$cmd { replSetGetRBID: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.217-0500 c20011| 2016-04-06T02:53:21.975-0500 D COMMAND [conn66] command: replSetGetRBID [js_test:multi_coll_drop] 2016-04-06T02:54:14.217-0500 c20011| 2016-04-06T02:53:21.975-0500 I COMMAND [conn66] command admin.$cmd command: replSetGetRBID { replSetGetRBID: 1 } numYields:0 reslen:92 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.218-0500 c20011| 2016-04-06T02:53:21.975-0500 D QUERY [conn66] Running query: query: {} sort: { $natural: -1 } projection: { ts: 1, h: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.220-0500 c20011| 2016-04-06T02:53:21.975-0500 D QUERY [conn66] Only one plan is available; it will be run but will not be cached.
query: {} sort: { $natural: -1 } projection: { ts: 1, h: 1 }, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:14.222-0500 c20011| 2016-04-06T02:53:21.976-0500 I COMMAND [conn66] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN cursorid:17725538875 ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:101 numYields:0 nreturned:101 reslen:2848 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.223-0500 c20011| 2016-04-06T02:53:21.976-0500 D COMMAND [conn66] killcursors: found 1 of 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.224-0500 c20011| 2016-04-06T02:53:21.976-0500 I COMMAND [conn66] killcursors local.oplog.rs numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.228-0500 c20011| 2016-04-06T02:53:21.976-0500 D QUERY [conn66] Running query: query: { _id: "mongovm16:20014:1459929123:-665935931" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.230-0500 c20011| 2016-04-06T02:53:21.976-0500 D QUERY [conn66] Using idhack: query: { _id: "mongovm16:20014:1459929123:-665935931" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.238-0500 failed to load: jstests/sharding/multi_coll_drop.js [js_test:multi_coll_drop] 2016-04-06T02:54:14.238-0500 c20011| 2016-04-06T02:53:21.976-0500 I COMMAND [conn66] query config.lockpings query: { _id: "mongovm16:20014:1459929123:-665935931" } planSummary: IDHACK ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:86 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.251-0500 c20011| 2016-04-06T02:53:21.976-0500 D QUERY [conn66] Running query: query: { _id: "mongovm16:20015:1459929127:-1485108316" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.253-0500 c20011| 2016-04-06T02:53:21.976-0500 D QUERY [conn66] Using idhack: query: { _id: "mongovm16:20015:1459929127:-1485108316" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.257-0500 c20011| 2016-04-06T02:53:21.976-0500 I COMMAND [conn66] query config.lockpings query: { _id: "mongovm16:20015:1459929127:-1485108316" } planSummary: IDHACK ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:87 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.259-0500 c20011| 2016-04-06T02:53:21.976-0500 D QUERY [conn66] Running query: query: { _id: "multidrop.coll" } sort: {} projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.266-0500 c20011| 2016-04-06T02:53:21.976-0500 D QUERY [conn66] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} ntoreturn=1
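
The shell error de-interleaved above is the test's terminal failure: error code 46 is LockBusy, raised after the config server timed out waiting for the multidrop.coll distributed lock, and DBCollection.prototype.drop (second frame of the stack trace) turns that server response into a thrown Error via _getErrorWithCode. A hedged sketch of how a shell caller could recognize and retry this specific failure; dropWithRetry is an illustrative helper, and the attempt count and backoff are arbitrary, not values from the test:

    // Retry a drop that fails with LockBusy (code 46) while another process
    // still holds the namespace's distributed lock in config.locks.
    function dropWithRetry(coll, attempts) {
        for (var i = 0; i < attempts; i++) {
            try {
                return coll.drop();
            } catch (e) {
                // _getErrorWithCode (see the stack trace above) copies the
                // server's error code onto the thrown Error object.
                if (e.code !== 46) {
                    throw e; // only retry LockBusy
                }
                sleep(1000); // arbitrary backoff before the next attempt
            }
        }
        throw new Error("drop still LockBusy after " + attempts + " attempts");
    }

[js_test:multi_coll_drop] 2016-04-06T02:54:14.271-0500 c20011| 2016-04-06T02:53:21.976-0500 I COMMAND [conn66] query config.locks query: { _id: "multidrop.coll" } planSummary: IDHACK ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:269 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: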
{ r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.272-0500 c20011| 2016-04-06T02:53:21.976-0500 D QUERY [conn66] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.292-0500 c20011| 2016-04-06T02:53:21.976-0500 D QUERY [conn66] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:14.297-0500 c20011| 2016-04-06T02:53:21.976-0500 I COMMAND [conn66] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:175 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.298-0500 c20011| 2016-04-06T02:53:21.976-0500 D COMMAND [conn66] run command admin.$cmd { replSetGetRBID: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.298-0500 c20011| 2016-04-06T02:53:21.976-0500 D COMMAND [conn66] command: replSetGetRBID [js_test:multi_coll_drop] 2016-04-06T02:54:14.300-0500 c20011| 2016-04-06T02:53:21.976-0500 I COMMAND [conn66] command admin.$cmd command: replSetGetRBID { replSetGetRBID: 1 } numYields:0 reslen:92 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.302-0500 c20011| 2016-04-06T02:53:21.977-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201977), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.303-0500 c20011| 2016-04-06T02:53:21.977-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.306-0500 c20011| 2016-04-06T02:53:21.977-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.310-0500 c20011| 2016-04-06T02:53:21.978-0500 D COMMAND [conn65] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:14.313-0500 c20011| 2016-04-06T02:53:21.978-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.315-0500 c20011| 2016-04-06T02:53:21.978-0500 D COMMAND [conn65] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:14.321-0500 c20011| 2016-04-06T02:53:21.978-0500 D REPL [conn65] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|4, t: 5 } and is durable through: { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.322-0500 c20011| 2016-04-06T02:53:21.978-0500 D REPL [conn65] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929188000|11, t: 4 } and is durable through: { ts: Timestamp 1459929188000|11, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.325-0500 c20011| 2016-04-06T02:53:21.978-0500 I COMMAND [conn65] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.326-0500 c20011| 2016-04-06T02:53:21.978-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.327-0500 c20011| 2016-04-06T02:53:21.978-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.327-0500 c20011| 2016-04-06T02:53:21.978-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.334-0500 c20011| 2016-04-06T02:53:21.978-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201977), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.339-0500 c20011| 2016-04-06T02:53:21.978-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d1'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201977), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07165c17830b843f1d1'), state: 2, 
who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201977), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.342-0500 c20011| 2016-04-06T02:53:21.978-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.344-0500 c20011| 2016-04-06T02:53:21.978-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.349-0500 c20011| 2016-04-06T02:53:21.978-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.351-0500 c20011| 2016-04-06T02:53:21.978-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.369-0500 c20011| 2016-04-06T02:53:21.979-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.373-0500 c20011| 2016-04-06T02:53:21.979-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.375-0500 c20011| 2016-04-06T02:53:21.979-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.378-0500 c20011| 2016-04-06T02:53:21.979-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.379-0500 c20011| 2016-04-06T02:53:21.979-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.385-0500 c20011| 2016-04-06T02:53:21.980-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.386-0500 c20011| 2016-04-06T02:53:21.980-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.386-0500 c20011| 2016-04-06T02:53:21.983-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.390-0500 c20011| 2016-04-06T02:53:21.986-0500 D COMMAND [conn54] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929201977), up: 74, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.391-0500 c20011| 2016-04-06T02:53:21.986-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:34624 #67 (17 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:14.393-0500 c20011| 2016-04-06T02:53:21.986-0500 D QUERY [conn54] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.393-0500 c20011| 2016-04-06T02:53:21.986-0500 D COMMAND [conn67] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.399-0500 c20011| 2016-04-06T02:53:21.986-0500 I WRITE [conn54] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929201977), up: 74, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.403-0500 c20011| 2016-04-06T02:53:21.986-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } cursorid:19461455963 numYields:1 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 487ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.410-0500 c20011| 2016-04-06T02:53:21.986-0500 
I COMMAND [conn67] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.415-0500 c20011| 2016-04-06T02:53:21.986-0500 D COMMAND [conn67] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.417-0500 c20011| 2016-04-06T02:53:21.986-0500 D COMMAND [conn67] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.419-0500 c20011| 2016-04-06T02:53:21.986-0500 D COMMAND [conn67] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.425-0500 c20011| 2016-04-06T02:53:21.987-0500 D QUERY [conn67] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:14.428-0500 c20011| 2016-04-06T02:53:21.987-0500 I COMMAND [conn67] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.431-0500 c20011| 2016-04-06T02:53:21.987-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d2'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201987), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.435-0500 c20011| 2016-04-06T02:53:21.987-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.438-0500 c20011| 2016-04-06T02:53:21.987-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.441-0500 c20011| 2016-04-06T02:53:21.988-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.442-0500 c20011| 2016-04-06T02:53:21.988-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.443-0500 c20011| 2016-04-06T02:53:21.988-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.445-0500 c20011| 2016-04-06T02:53:21.988-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.450-0500 c20011| 2016-04-06T02:53:21.988-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d2'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201987), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.458-0500 c20011| 2016-04-06T02:53:21.988-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d2'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201987), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07165c17830b843f1d2'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201987), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.459-0500 c20011| 2016-04-06T02:53:21.988-0500 D NETWORK [conn66] SocketException: remote: 192.168.100.28:34623 error: 9001 socket exception [CLOSED] server [192.168.100.28:34623] [js_test:multi_coll_drop] 2016-04-06T02:54:14.460-0500 c20011| 2016-04-06T02:53:21.988-0500 I NETWORK [conn66] end connection 192.168.100.28:34623 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:14.464-0500 c20011| 2016-04-06T02:53:21.988-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.466-0500 c20011| 2016-04-06T02:53:21.988-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp 1459929198000|4, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.468-0500 c20011| 2016-04-06T02:53:21.988-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.470-0500 c20011| 2016-04-06T02:53:21.988-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.475-0500 c20011| 2016-04-06T02:53:21.989-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929198000|4, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.476-0500 c20011| 2016-04-06T02:53:21.989-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.482-0500 c20011| 2016-04-06T02:53:21.989-0500 D REPL [conn62] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929201000|1, t: 5 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.492-0500 c20011| 2016-04-06T02:53:21.989-0500 D REPL [conn62] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999977μs [js_test:multi_coll_drop] 2016-04-06T02:54:14.493-0500 c20011| 2016-04-06T02:53:21.989-0500 D COMMAND [conn51] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.503-0500 c20011| 2016-04-06T02:53:21.989-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.507-0500 c20011| 2016-04-06T02:53:21.989-0500 D QUERY [conn51] Only one plan is available; it will be run but will not be cached. 
query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:14.511-0500 c20011| 2016-04-06T02:53:21.989-0500 I COMMAND [conn51] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.514-0500 c20011| 2016-04-06T02:53:21.989-0500 D COMMAND [conn64] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929188000|11 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.521-0500 c20011| 2016-04-06T02:53:21.989-0500 D COMMAND [conn65] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:14.522-0500 c20011| 2016-04-06T02:53:21.989-0500 D COMMAND [conn65] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:14.537-0500 c20011| 2016-04-06T02:53:21.989-0500 D REPL [conn65] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|4, t: 5 } and is durable through: { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.539-0500 c20011| 2016-04-06T02:53:21.989-0500 D REPL [conn65] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929188000|11, t: 4 } and is durable through: { ts: Timestamp 1459929188000|11, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.564-0500 c20011| 2016-04-06T02:53:21.990-0500 I COMMAND [conn65] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.576-0500 c20011| 2016-04-06T02:53:21.990-0500 I COMMAND [conn64] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929188000|11 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 5 } planSummary: COLLSCAN cursorid:21041390287 keysExamined:0 docsExamined:7 numYields:0 nreturned:7 reslen:1843 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.590-0500 c20011| 2016-04-06T02:53:21.990-0500 D REPL [conn54] Required snapshot optime: { ts: Timestamp 
1459929201000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|4, t: 5 }, name-id: "273" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.605-0500 c20011| 2016-04-06T02:53:21.990-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:14.615-0500 c20011| 2016-04-06T02:53:21.990-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:14.620-0500 c20011| 2016-04-06T02:53:21.990-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.623-0500 c20011| 2016-04-06T02:53:21.990-0500 D REPL [conn59] Required snapshot optime: { ts: Timestamp 1459929201000|1, t: 5 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929198000|4, t: 5 }, name-id: "273" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.626-0500 c20011| 2016-04-06T02:53:21.990-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.635-0500 c20011| 2016-04-06T02:53:21.990-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.640-0500 c20011| 2016-04-06T02:53:21.992-0500 D COMMAND [conn59] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:14.641-0500 c20011| 2016-04-06T02:53:21.992-0500 D COMMAND [conn59] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:14.654-0500 c20011| 2016-04-06T02:53:21.992-0500 D REPL [conn59] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 
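
These position reports are what eventually unblock the stalled read: conn62's find on config.lockpings asked for readConcern { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, and that optime only enters the 'committed' snapshot once a majority of members report it durable via replSetUpdatePosition, at which point the primary advances _lastCommittedOpTime (next line) and the query proceeds. A sketch of the same blocking read issued from the shell, assuming the optime from the log; note the shell Timestamp constructor takes (seconds, increment), corresponding to the millisecond form the server log prints:

    // A majority read that does not return until the given optime is
    // majority-committed on the config server (or maxTimeMS expires).
    var res = db.getSiblingDB("config").runCommand({
        find: "lockpings",
        filter: { _id: "mongovm16:20010:1459929128:185613966" },
        readConcern: {
            level: "majority",
            // Logged as { ts: Timestamp 1459929201000|1, t: 5 }.
            afterOpTime: { ts: Timestamp(1459929201, 1), t: NumberLong(5) }
        },
        limit: 1,
        maxTimeMS: 30000
    });
    // While waiting, the server logs "Waiting for 'committed' snapshot to be
    // available for reading", exactly as conn62 does here.
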
2016-04-06T02:54:14.661-0500 c20011| 2016-04-06T02:53:21.992-0500 D REPL [conn59] Updating _lastCommittedOpTime to { ts: Timestamp 1459929201000|1, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.667-0500 c20011| 2016-04-06T02:53:21.992-0500 D REPL [conn59] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|2, t: 4 } and is durable through: { ts: Timestamp 1459929198000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.681-0500 c20011| 2016-04-06T02:53:21.992-0500 I COMMAND [conn59] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929198000|2, t: 4 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.683-0500 c20011| 2016-04-06T02:53:21.992-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.687-0500 c20011| 2016-04-06T02:53:21.992-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.688-0500 c20011| 2016-04-06T02:53:21.992-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.691-0500 c20011| 2016-04-06T02:53:21.992-0500 I COMMAND [conn54] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929201977), up: 74, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.695-0500 c20011| 2016-04-06T02:53:21.992-0500 I COMMAND [conn58] command local.oplog.rs command: getMore { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } cursorid:19461455963 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.699-0500 c20011| 2016-04-06T02:53:21.993-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 
cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.700-0500 c20011| 2016-04-06T02:53:21.993-0500 D COMMAND [conn64] run command local.$cmd { getMore: 21041390287, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.700-0500 c20011| 2016-04-06T02:53:21.993-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.702-0500 c20011| 2016-04-06T02:53:21.993-0500 D COMMAND [conn58] run command local.$cmd { getMore: 19461455963, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.709-0500 c20011| 2016-04-06T02:53:21.993-0500 I COMMAND [conn64] command local.oplog.rs command: getMore { getMore: 21041390287, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929198000|4, t: 5 } } cursorid:21041390287 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.710-0500 c20011| 2016-04-06T02:53:21.994-0500 D COMMAND [conn64] run command local.$cmd { getMore: 21041390287, collection: "oplog.rs", maxTimeMS: 2500, term: 5, lastKnownCommittedOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.713-0500 c20011| 2016-04-06T02:53:21.994-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.717-0500 c20011| 2016-04-06T02:53:21.994-0500 D COMMAND [conn65] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:14.721-0500 c20011| 2016-04-06T02:53:21.994-0500 D COMMAND [conn65] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:14.722-0500 c20011| 2016-04-06T02:53:21.994-0500 D REPL [conn65] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929198000|4, t: 5 } and is durable through: { ts: Timestamp 1459929198000|4, t: 5 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.723-0500 c20011| 2016-04-06T02:53:21.994-0500 D REPL [conn65] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929201000|1, t: 5 } and is durable through: { ts: Timestamp 1459929188000|11, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.724-0500 c20011| 2016-04-06T02:53:21.994-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.731-0500 c20011| 2016-04-06T02:53:21.994-0500 I COMMAND [conn65] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, appliedOpTime: { ts: Timestamp 1459929198000|4, t: 5 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929188000|11, t: 4 }, appliedOpTime: { ts: Timestamp 1459929201000|1, t: 5 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.747-0500 c20011| 2016-04-06T02:53:21.994-0500 D COMMAND [conn54] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.749-0500 c20011| 2016-04-06T02:53:21.994-0500 D COMMAND [conn54] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.750-0500 c20011| 2016-04-06T02:53:21.994-0500 D QUERY [conn54] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:14.756-0500 c20011| 2016-04-06T02:53:21.994-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.760-0500 c20011| 2016-04-06T02:53:21.995-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201995), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.763-0500 c20011| 2016-04-06T02:53:21.995-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.765-0500 c20011| 2016-04-06T02:53:21.995-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.767-0500 c20011| 2016-04-06T02:53:21.995-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
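
The "score(2.0003)" lines above are the query plan ranker logging its formula even though only one candidate plan exists. The score is a base score, plus productivity (the fraction of work cycles that advanced a result), plus three 0.0001 tie-breaker bonuses for plans that avoid a FETCH, an in-memory SORT, and index intersection. For the IXSCAN on { ns: 1, lastmod: 1 } here, every unit of work advanced a result, so:

    score = baseScore + advanced/works + tieBreakers
          = 1 + 1/1 + (0.0001 + 0.0001 + 0.0001)
          = 2.0003
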
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.769-0500 c20011| 2016-04-06T02:53:21.995-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.769-0500 c20011| 2016-04-06T02:53:21.995-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.771-0500 c20011| 2016-04-06T02:53:21.995-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.774-0500 c20011| 2016-04-06T02:53:21.995-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201995), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.785-0500 c20011| 2016-04-06T02:53:21.995-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201995), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07165c17830b843f1d3'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201995), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.788-0500 c20011| 2016-04-06T02:53:21.995-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.790-0500 c20011| 2016-04-06T02:53:21.995-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.792-0500 c20011| 2016-04-06T02:53:21.995-0500 D COMMAND [conn62] Using 'committed' snapshot. 
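
The E11000 errors above are the expected "lock busy" signal, not a server bug. The distributed lock is taken with a findAndModify upsert whose query is { _id: "multidrop.coll", state: 0 }: when the lock document already exists with state != 0 (held), the query matches nothing, the upsert tries to insert a second document with the same _id, and the unique _id_ index rejects it. The caller then re-reads config.locks and config.lockpings to inspect the holder, and retries. A shell sketch of the same pattern (the who/process values are placeholders):

    // Try to take the dist lock; a code-11000 reply means it is already held.
    var res = db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },      // matches only if unlocked
        update: { $set: { ts: ObjectId(), state: 2,
                          who: "host:port:epoch:conn",   // placeholder
                          process: "host:port:epoch",    // placeholder
                          when: new Date(),
                          why: "splitting chunk" } },
        upsert: true, new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    if (!res.ok && res.code === 11000) {
        // Lock held elsewhere: see who has it before retrying.
        printjson(db.getSiblingDB("config").locks.findOne({ _id: "multidrop.coll" }));
    }
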
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.799-0500 c20011| 2016-04-06T02:53:21.995-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.806-0500 c20011| 2016-04-06T02:53:21.995-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.808-0500 c20011| 2016-04-06T02:53:21.995-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.813-0500 c20011| 2016-04-06T02:53:21.995-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.815-0500 c20011| 2016-04-06T02:53:21.995-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.816-0500 c20011| 2016-04-06T02:53:21.995-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.819-0500 c20011| 2016-04-06T02:53:21.996-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.820-0500 c20011| 2016-04-06T02:53:21.996-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.822-0500 c20011| 2016-04-06T02:53:21.997-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.824-0500 c20011| 2016-04-06T02:53:21.998-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.828-0500 c20011| 2016-04-06T02:53:21.998-0500 D COMMAND [conn54] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.830-0500 c20011| 2016-04-06T02:53:21.998-0500 D COMMAND [conn54] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.833-0500 c20011| 2016-04-06T02:53:21.998-0500 D QUERY [conn54] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:14.837-0500 c20011| 2016-04-06T02:53:21.998-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.846-0500 c20011| 2016-04-06T02:53:21.999-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d4'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201999), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.848-0500 c20011| 2016-04-06T02:53:21.999-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.850-0500 c20011| 2016-04-06T02:53:21.999-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.853-0500 c20011| 2016-04-06T02:53:21.999-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
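
Every config read in this stretch carries readConcern { level: "majority", afterOpTime: ... }, which is why each one logs "Waiting for 'committed' snapshot" followed by "Using 'committed' snapshot": the server blocks until a majority-committed snapshot at or beyond the requested optime exists, then answers from that snapshot rather than from the newest local data. Issued by hand the read looks like this (a sketch; afterOpTime is whatever optime the caller last observed, omitted here):

    // Majority read of the dist lock document, answered from a
    // majority-committed snapshot.
    db.getSiblingDB("config").runCommand({
        find: "locks",
        filter: { _id: "multidrop.coll" },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });
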
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.865-0500 c20011| 2016-04-06T02:53:21.999-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.871-0500 c20011| 2016-04-06T02:53:21.999-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.873-0500 c20011| 2016-04-06T02:53:21.999-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.884-0500 c20011| 2016-04-06T02:53:21.999-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d4'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201999), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.889-0500 c20011| 2016-04-06T02:53:21.999-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07165c17830b843f1d4'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201999), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07165c17830b843f1d4'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929201999), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.891-0500 c20011| 2016-04-06T02:53:21.999-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.893-0500 c20011| 2016-04-06T02:53:21.999-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.896-0500 c20011| 2016-04-06T02:53:21.999-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.897-0500 c20011| 2016-04-06T02:53:21.999-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.902-0500 c20011| 2016-04-06T02:53:22.000-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.904-0500 c20011| 2016-04-06T02:53:22.012-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.905-0500 c20011| 2016-04-06T02:53:22.013-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.910-0500 c20011| 2016-04-06T02:53:22.013-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.911-0500 c20011| 2016-04-06T02:53:22.013-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.914-0500 c20011| 2016-04-06T02:53:22.019-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.916-0500 c20011| 2016-04-06T02:53:22.019-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.919-0500 c20011| 2016-04-06T02:53:22.024-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.922-0500 c20011| 2016-04-06T02:53:22.025-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.925-0500 c20011| 2016-04-06T02:53:22.025-0500 D COMMAND [conn54] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.927-0500 c20011| 2016-04-06T02:53:22.025-0500 D COMMAND [conn54] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.929-0500 c20011| 2016-04-06T02:53:22.025-0500 D QUERY [conn54] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:14.935-0500 c20011| 2016-04-06T02:53:22.026-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.946-0500 c20011| 2016-04-06T02:53:22.026-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202026), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.963-0500 c20011| 2016-04-06T02:53:22.026-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.964-0500 c20011| 2016-04-06T02:53:22.026-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.967-0500 c20011| 2016-04-06T02:53:22.027-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
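
The planSummary: IDHACK entries mark the _id fast path: a plain equality predicate on _id skips plan enumeration entirely and does a single lookup in the _id index, hence keysExamined:1 docsExamined:1 on each of these reads. A quick way to confirm a query takes this path (a sketch; exact explain output layout varies by server version):

    // For a plain _id equality the winning plan should report IDHACK.
    var exp = db.getSiblingDB("config").locks
                  .find({ _id: "multidrop.coll" }).limit(1)
                  .explain();
    printjson(exp.queryPlanner.winningPlan);
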
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.969-0500 c20011| 2016-04-06T02:53:22.027-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.969-0500 c20011| 2016-04-06T02:53:22.027-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.970-0500 c20011| 2016-04-06T02:53:22.027-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:14.973-0500 c20011| 2016-04-06T02:53:22.027-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202026), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:14.977-0500 c20011| 2016-04-06T02:53:22.027-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202026), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07265c17830b843f1d5'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202026), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.979-0500 c20011| 2016-04-06T02:53:22.028-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.980-0500 c20011| 2016-04-06T02:53:22.028-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.983-0500 c20011| 2016-04-06T02:53:22.028-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.984-0500 c20011| 2016-04-06T02:53:22.028-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:14.991-0500 c20011| 2016-04-06T02:53:22.028-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:14.996-0500 c20011| 2016-04-06T02:53:22.029-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:14.997-0500 c20011| 2016-04-06T02:53:22.029-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:14.999-0500 c20011| 2016-04-06T02:53:22.029-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.026-0500 c20011| 2016-04-06T02:53:22.029-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.030-0500 c20011| 2016-04-06T02:53:22.029-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.032-0500 c20011| 2016-04-06T02:53:22.029-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.033-0500 c20011| 2016-04-06T02:53:22.030-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.039-0500 c20011| 2016-04-06T02:53:22.034-0500 D COMMAND [conn54] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.040-0500 c20011| 2016-04-06T02:53:22.035-0500 D COMMAND [conn54] Waiting 
for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.042-0500 c20011| 2016-04-06T02:53:22.035-0500 D COMMAND [conn54] Using 'committed' snapshot. { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.046-0500 c20011| 2016-04-06T02:53:22.035-0500 D QUERY [conn54] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:15.051-0500 c20011| 2016-04-06T02:53:22.035-0500 I COMMAND [conn54] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:557 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.060-0500 c20011| 2016-04-06T02:53:22.036-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d6'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202036), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.061-0500 c20011| 2016-04-06T02:53:22.036-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.064-0500 c20011| 2016-04-06T02:53:22.036-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.067-0500 c20011| 2016-04-06T02:53:22.036-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
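
The recurring config.chunks query (filter { ns: "multidrop.coll" }, sort { lastmod: -1 }, limit 1) is the standard "collection version" read: lastmod holds the chunk version, so the highest lastmod for a namespace is the collection's current version, and the { ns: 1, lastmod: 1 } index answers it with a single key (keysExamined:1). It is interleaved with the lock attempts, presumably to detect concurrent metadata changes. In the shell:

    // Highest chunk version for the namespace == collection version.
    db.getSiblingDB("config").chunks
      .find({ ns: "multidrop.coll" })
      .sort({ lastmod: -1 }).limit(1)
      .next();
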
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.068-0500 c20011| 2016-04-06T02:53:22.036-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.079-0500 c20011| 2016-04-06T02:53:22.036-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:15.081-0500 c20011| 2016-04-06T02:53:22.036-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:15.087-0500 c20011| 2016-04-06T02:53:22.036-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d6'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202036), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.090-0500 c20011| 2016-04-06T02:53:22.036-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d6'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202036), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07265c17830b843f1d6'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202036), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.092-0500 c20011| 2016-04-06T02:53:22.037-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.094-0500 c20011| 2016-04-06T02:53:22.037-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.100-0500 c20011| 2016-04-06T02:53:22.037-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.102-0500 c20011| 2016-04-06T02:53:22.037-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.105-0500 c20011| 2016-04-06T02:53:22.037-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.108-0500 c20011| 2016-04-06T02:53:22.038-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.110-0500 c20011| 2016-04-06T02:53:22.038-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.114-0500 c20011| 2016-04-06T02:53:22.038-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.115-0500 c20011| 2016-04-06T02:53:22.038-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.119-0500 c20011| 2016-04-06T02:53:22.039-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.120-0500 c20011| 2016-04-06T02:53:22.039-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.121-0500 c20011| 2016-04-06T02:53:22.040-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.123-0500 c20011| 2016-04-06T02:53:22.044-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202044), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.123-0500 c20011| 2016-04-06T02:53:22.044-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.125-0500 c20011| 2016-04-06T02:53:22.044-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.128-0500 c20011| 2016-04-06T02:53:22.044-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.129-0500 c20011| 2016-04-06T02:53:22.044-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.130-0500 c20011| 2016-04-06T02:53:22.044-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:15.131-0500 c20011| 2016-04-06T02:53:22.044-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:15.140-0500 c20011| 2016-04-06T02:53:22.044-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202044), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.147-0500 c20011| 2016-04-06T02:53:22.044-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202044), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07265c17830b843f1d7'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202044), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.153-0500 c20011| 2016-04-06T02:53:22.044-0500 D COMMAND [conn62] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.157-0500 c20011| 2016-04-06T02:53:22.044-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.163-0500 c20011| 2016-04-06T02:53:22.044-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.164-0500 c20011| 2016-04-06T02:53:22.044-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.168-0500 c20011| 2016-04-06T02:53:22.044-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.171-0500 c20011| 2016-04-06T02:53:22.045-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.173-0500 c20011| 2016-04-06T02:53:22.045-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.175-0500 c20011| 2016-04-06T02:53:22.045-0500 D COMMAND [conn62] Using 'committed' snapshot. 
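
Each failed attempt above mints a fresh lock ts (ObjectIds ...f1d3 through ...f1d8 so far), and the whole probe, the findAndModify plus the follow-up reads, finishes in a few milliseconds before the next try. The caller's loop is roughly shaped like the following (a hypothetical sketch, not the server's actual code; tryTakeDistLock stands for the findAndModify shown earlier):

    // Keep probing until the lock is granted or an overall deadline passes.
    var deadline = Date.now() + 30000;
    var lockDoc = null;
    while (lockDoc === null && Date.now() < deadline) {
        var res = tryTakeDistLock("multidrop.coll");  // hypothetical helper
        if (res.ok) {
            lockDoc = res.value;      // lock granted
        } else {
            sleep(50);                // brief pause, then retry
        }
    }
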
{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.176-0500 c20011| 2016-04-06T02:53:22.045-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.181-0500 c20011| 2016-04-06T02:53:22.045-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.182-0500 c20011| 2016-04-06T02:53:22.045-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.186-0500 c20011| 2016-04-06T02:53:22.046-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.189-0500 c20011| 2016-04-06T02:53:22.047-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d8'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202047), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.190-0500 c20011| 2016-04-06T02:53:22.047-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.195-0500 c20011| 2016-04-06T02:53:22.047-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.196-0500 *** Stepping down connection to mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:15.197-0500 c20011| 2016-04-06T02:53:22.047-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. 
query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.199-0500 c20011| 2016-04-06T02:53:22.047-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.201-0500 c20011| 2016-04-06T02:53:22.047-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:15.202-0500 c20011| 2016-04-06T02:53:22.047-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:15.207-0500 c20011| 2016-04-06T02:53:22.047-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d8'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202047), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.214-0500 c20011| 2016-04-06T02:53:22.047-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d8'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202047), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07265c17830b843f1d8'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202047), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.216-0500 c20011| 2016-04-06T02:53:22.047-0500 D COMMAND [conn62] run command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.219-0500 c20011| 2016-04-06T02:53:22.047-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.220-0500 c20011| 2016-04-06T02:53:22.047-0500 D COMMAND [conn62] Using 'committed' snapshot. 
{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.221-0500 c20011| 2016-04-06T02:53:22.047-0500 D QUERY [conn62] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.226-0500 c20011| 2016-04-06T02:53:22.047-0500 I COMMAND [conn62] command config.locks command: find { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:641 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.230-0500 c20011| 2016-04-06T02:53:22.048-0500 D COMMAND [conn62] run command config.$cmd { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.234-0500 c20011| 2016-04-06T02:53:22.048-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.240-0500 c20011| 2016-04-06T02:53:22.048-0500 D COMMAND [conn62] Using 'committed' snapshot. { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.243-0500 c20011| 2016-04-06T02:53:22.048-0500 D QUERY [conn62] Using idhack: query: { _id: "mongovm16:20010:1459929128:185613966" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.248-0500 c20011| 2016-04-06T02:53:22.048-0500 I COMMAND [conn62] command config.lockpings command: find { find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:461 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.248-0500 c20011| 2016-04-06T02:53:22.048-0500 D COMMAND [conn62] run command admin.$cmd { serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.249-0500 c20011| 2016-04-06T02:53:22.049-0500 I COMMAND [conn62] command admin.$cmd command: serverStatus { serverStatus: 1, maxTimeMS: 30000 } numYields:0 reslen:25720 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.254-0500 c20011| 2016-04-06T02:53:22.050-0500 D COMMAND [conn62] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202050), why: "splitting chunk [{ _id: -61.0 }, { 
_id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.256-0500 c20011| 2016-04-06T02:53:22.050-0500 D QUERY [conn62] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.258-0500 c20011| 2016-04-06T02:53:22.050-0500 D QUERY [conn62] Relevant index 1 is kp: { state: 1, process: 1 } name: 'state_1_process_1' io: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.259-0500 c20011| 2016-04-06T02:53:22.050-0500 D QUERY [conn62] Only one plan is available; it will be run but will not be cached. query: { _id: "multidrop.coll", state: 0 } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.260-0500 c20011| 2016-04-06T02:53:22.050-0500 D - [conn62] User Assertion: 11000:E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.262-0500 c20011| 2016-04-06T02:53:22.050-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::DataSizeChange [js_test:multi_coll_drop] 2016-04-06T02:54:15.263-0500 c20011| 2016-04-06T02:53:22.050-0500 D STORAGE [conn62] CUSTOM ROLLBACK mongo::WiredTigerRecordStore::NumRecordsChange [js_test:multi_coll_drop] 2016-04-06T02:54:15.268-0500 c20011| 2016-04-06T02:53:22.051-0500 D COMMAND [conn62] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202050), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }' and metadata '{ $replData: 1 }': 11000 E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.278-0500 c20011| 2016-04-06T02:53:22.051-0500 I COMMAND [conn62] command config.locks command: findAndModify { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c07265c17830b843f1d9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202050), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } update: { $set: { ts: ObjectId('5704c07265c17830b843f1d9'), state: 2, who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929202050), why: "splitting chunk [{ _id: -61.0 }, { _id: MaxKey }) in multidrop.coll" } } exception: E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" } code:11000 numYields:0 reslen:140 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.283-0500 c20011| 2016-04-06T02:53:22.051-0500 D COMMAND [conn62] run 
command config.$cmd { find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.284-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.286-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.305-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.310-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.312-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.316-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.316-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.320-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.322-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.323-0500 c20013| 2016-04-06T02:52:43.300-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.323-0500 c20013| 2016-04-06T02:52:43.301-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:15.330-0500 c20013| 2016-04-06T02:52:43.301-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1641 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.301-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|6, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.342-0500 c20013| 2016-04-06T02:52:43.301-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.349-0500 c20013| 2016-04-06T02:52:43.301-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1642 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|6, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.350-0500 c20013| 2016-04-06T02:52:43.301-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1642 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.353-0500 c20013| 2016-04-06T02:52:43.301-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1641 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.355-0500 c20013| 2016-04-06T02:52:43.301-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1642 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.368-0500 c20013| 2016-04-06T02:52:43.323-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.372-0500 c20013| 2016-04-06T02:52:43.323-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1644 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 
1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.373-0500 c20013| 2016-04-06T02:52:43.323-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1644 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.375-0500 c20013| 2016-04-06T02:52:43.323-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1644 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.379-0500 c20013| 2016-04-06T02:52:43.324-0500 D COMMAND [conn10] run command config.$cmd { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.381-0500 c20013| 2016-04-06T02:52:43.324-0500 D REPL [conn10] waitUntilOpTime: waiting for optime:{ ts: Timestamp 1459929163000|7, t: 3 } to be in a snapshot -- current snapshot: { ts: Timestamp 1459929163000|6, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.383-0500 c20013| 2016-04-06T02:52:43.324-0500 D REPL [conn10] waitUntilOpTime: waiting for a new snapshot to occur for micros: 29999957μs [js_test:multi_coll_drop] 2016-04-06T02:54:15.386-0500 c20013| 2016-04-06T02:52:43.330-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1641 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.388-0500 c20013| 2016-04-06T02:52:43.331-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|7, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.390-0500 c20013| 2016-04-06T02:52:43.331-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:15.393-0500 c20013| 2016-04-06T02:52:43.331-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1647 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.331-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.394-0500 c20013| 2016-04-06T02:52:43.331-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1647 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.400-0500 c20013| 2016-04-06T02:52:43.331-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.402-0500 c20013| 2016-04-06T02:52:43.331-0500 D COMMAND [conn10] Using 'committed' snapshot. 
{ find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.404-0500 c20013| 2016-04-06T02:52:43.332-0500 D QUERY [conn10] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:15.420-0500 c20013| 2016-04-06T02:52:43.332-0500 I COMMAND [conn10] command config.chunks command: find { find: "chunks", filter: { ns: "multidrop.coll" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|7, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:537 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.430-0500 c20013| 2016-04-06T02:52:43.337-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1647 finished with response: { cursor: { nextBatch: [ { ts: Timestamp 1459929163000|8, t: 3, h: -788849406847319887, v: 2, op: "u", ns: "config.locks", o2: { _id: "multidrop.coll" }, o: { $set: { ts: ObjectId('5704c04b65c17830b843f1c7'), state: 2, when: new Date(1459929163335), why: "splitting chunk [{ _id: -64.0 }, { _id: MaxKey }) in multidrop.coll" } } } ], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.432-0500 c20013| 2016-04-06T02:52:43.337-0500 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1459929163000|8 and ending at ts: Timestamp 1459929163000|8 [js_test:multi_coll_drop] 2016-04-06T02:54:15.434-0500 c20013| 2016-04-06T02:52:43.338-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:15.435-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.435-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.436-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.438-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.438-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.441-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.443-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.444-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.446-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.447-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.448-0500 c20013| 2016-04-06T02:52:43.339-0500 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.449-0500 c20013| 2016-04-06T02:52:43.339-0500 D REPL [rsSync] replication batch size is 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.451-0500 c20013| 2016-04-06T02:52:43.339-0500 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.451-0500 c20013| 2016-04-06T02:52:43.339-0500 D QUERY [repl writer worker 15] Using idhack: { _id: "multidrop.coll" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.451-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.451-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.452-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.455-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.458-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.459-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 
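The E11000 duplicate key error in the findAndModify above is not corruption; it is how the config server signals that the distributed lock is already held. The query { _id: "multidrop.coll", state: 0 } matches only an unlocked lock document, so while another process holds the lock nothing matches, the upsert path tries to insert a fresh document with the same _id, and the insert collides on the _id index. A minimal mongo-shell sketch of the acquisition pattern, abridged from the logged command (the who/why strings here are placeholders):

    // Try to take the distributed lock the way conn62 does above.
    var res = db.getSiblingDB("config").runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },      // matches only an unlocked doc
        update: { $set: { ts: ObjectId(), state: 2,      // state 2 = exclusively held
                          who: "host:port:epoch:connN",  // placeholder identity strings
                          why: "splitting chunk" } },
        upsert: true, new: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    // Success: res.ok == 1 and res.value is the lock document we now hold.
    // Busy:    res.ok == 0 with code 11000, because the upsert's insert collided
    //          with the current holder's document on _id -- exactly the assertion
    //          logged above. The caller then re-reads config.locks and retries.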
2016-04-06T02:54:15.459-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.460-0500 c20013| 2016-04-06T02:52:43.340-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1649 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.340-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|7, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.461-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.462-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.463-0500 c20013| 2016-04-06T02:52:43.338-0500 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.463-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.465-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.465-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.467-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.469-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.470-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.472-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.473-0500 c20013| 2016-04-06T02:52:43.340-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1649 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.482-0500 c20013| 2016-04-06T02:52:43.340-0500 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.484-0500 c20013| 2016-04-06T02:52:43.346-0500 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.488-0500 c20013| 2016-04-06T02:52:43.346-0500 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool [js_test:multi_coll_drop] 2016-04-06T02:54:15.488-0500 c20013| 2016-04-06T02:52:43.346-0500 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:15.496-0500 c20013| 2016-04-06T02:52:43.346-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.502-0500 c20013| 2016-04-06T02:52:43.346-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1650 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|7, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.503-0500 c20013| 2016-04-06T02:52:43.346-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1650 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.507-0500 c20013| 2016-04-06T02:52:43.347-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1650 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.514-0500 c20013| 2016-04-06T02:52:43.366-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.519-0500 c20013| 2016-04-06T02:52:43.366-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1652 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.522-0500 c20013| 2016-04-06T02:52:43.366-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1652 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.522-0500 c20013| 2016-04-06T02:52:43.367-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1652 finished with response: 
{ ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.524-0500 c20013| 2016-04-06T02:52:43.367-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1649 finished with response: { cursor: { nextBatch: [], id: 19853084149, ns: "local.oplog.rs" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.525-0500 c20013| 2016-04-06T02:52:43.367-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.527-0500 c20013| 2016-04-06T02:52:43.367-0500 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog [js_test:multi_coll_drop] 2016-04-06T02:54:15.530-0500 c20013| 2016-04-06T02:52:43.367-0500 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 1655 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.367-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.533-0500 c20013| 2016-04-06T02:52:43.367-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 1655 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.536-0500 c20013| 2016-04-06T02:52:43.367-0500 D COMMAND [conn15] run command config.$cmd { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.541-0500 c20013| 2016-04-06T02:52:43.723-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1656 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:53.723-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.544-0500 c20013| 2016-04-06T02:53:04.658-0500 D COMMAND [conn15] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.546-0500 c20013| 2016-04-06T02:53:04.658-0500 D COMMAND [conn15] Using 'committed' snapshot. 
{ find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.549-0500 c20013| 2016-04-06T02:53:04.658-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1655 timed out, adjusted timeout after getting connection from pool was 5000ms, op was id: 7, states: [ UNINITIALIZED, IN_PROGRESS ], start_time: 2016-04-06T02:52:43.367-0500, request: RemoteCommand 1655 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.367-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.554-0500 c20013| 2016-04-06T02:53:04.658-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Operation timing out; original request was: RemoteCommand 1655 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.367-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.561-0500 c20013| 2016-04-06T02:53:04.658-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to execute command: RemoteCommand 1655 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.367-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } reason: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1655 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.367-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.564-0500 c20013| 2016-04-06T02:53:04.658-0500 D ASIO [NetworkInterfaceASIO-BGSync-0] Request 1655 finished with response: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1655 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.367-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.565-0500 c20013| 2016-04-06T02:53:04.658-0500 D QUERY [conn15] Using idhack: query: { _id: "multidrop.coll" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:15.566-0500 c20013| 2016-04-06T02:52:43.875-0500 D COMMAND [conn11] run command admin.$cmd { ismaster: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.567-0500 c20013| 2016-04-06T02:52:44.214-0500 D COMMAND [conn7] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.572-0500 c20013| 2016-04-06T02:53:04.659-0500 D REPL [rsBackgroundSync-0] Error returned from oplog query: ExceededTimeLimit: Operation timed out, request was RemoteCommand 1655 -- target:mongovm16:20011 db:local expDate:2016-04-06T02:52:48.367-0500 cmd:{ getMore: 19853084149, collection: "oplog.rs", maxTimeMS: 2500, term: 3, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.573-0500 c20013| 2016-04-06T02:53:04.659-0500 D COMMAND [conn7] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:15.578-0500 c20013| 2016-04-06T02:53:04.659-0500 I 
ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.580-0500 c20013| 2016-04-06T02:53:04.659-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:54:15.580-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.581-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1659 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:14.659-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.583-0500 c20013| 2016-04-06T02:53:04.659-0500 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 5000ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.586-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1659 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.587-0500 c20013| 2016-04-06T02:53:04.659-0500 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected [js_test:multi_coll_drop] 2016-04-06T02:54:15.591-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1660 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:09.659-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 3, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.595-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1661 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:09.659-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 3, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.597-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.598-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.600-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1658 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.601-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1662 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.605-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1663 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.605-0500 c20013| 2016-04-06T02:53:04.659-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.607-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1663 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:15.607-0500 c20013| 2016-04-06T02:53:04.659-0500 I ASIO [NetworkInterfaceASIO-Replication-0] 
Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.607-0500 c20013| 2016-04-06T02:53:04.659-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1662 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:15.610-0500 c20013| 2016-04-06T02:53:04.660-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.622-0500 c20013| 2016-04-06T02:53:04.660-0500 I COMMAND [conn15] command config.collections command: find { find: "collections", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929163000|8, t: 3 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:492 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.629-0500 c20013| 2016-04-06T02:52:45.867-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter sending slave oplog progress to upstream updater mongovm16:20011: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.642-0500 c20013| 2016-04-06T02:53:04.661-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] startCommand: RemoteCommand 1664 -- target:mongovm16:20011 db:admin cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|1, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|3, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.645-0500 c20013| 2016-04-06T02:52:48.718-0500 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.649-0500 c20013| 2016-04-06T02:53:04.661-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Starting asynchronous command 1664 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.653-0500 c20013| 2016-04-06T02:53:04.661-0500 I COMMAND [conn7] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:509 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.655-0500 c20013| 2016-04-06T02:52:51.721-0500 D COMMAND [conn8] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.661-0500 c20013| 2016-04-06T02:52:51.721-0500 D COMMAND [conn12] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.663-0500 c20013| 2016-04-06T02:53:04.661-0500 D NETWORK [conn7] SocketException: remote: 192.168.100.28:49612 error: 9001 socket exception [CLOSED] server [192.168.100.28:49612] 
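Every config read in this stretch carries readConcern { level: "majority", afterOpTime: ... }: the server parks the command ("Waiting for 'committed' snapshot to be available for reading") until its majority-committed snapshot has advanced to the requested optime, then serves the read from that snapshot. That is why conn15's find on config.collections, issued at 02:52:43, only completes here at 02:53:04 once replication catches up. A shell sketch of the same read; note that afterOpTime is the internal form used by the sharding code in this 3.3 build, not a field drivers normally send:

    // Majority read that waits for a specific optime to commit (values from the log,
    // which renders this optime as "Timestamp 1459929163000|8", i.e. ms|increment).
    db.getSiblingDB("config").runCommand({
        find: "collections",
        filter: { _id: "multidrop.coll" },
        limit: 1,
        maxTimeMS: 30000,
        readConcern: {
            level: "majority",   // answer only from the committed snapshot
            // block until this optime is majority-committed on this node:
            afterOpTime: { ts: Timestamp(1459929163, 8), t: NumberLong(3) }
        }
    });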
[js_test:multi_coll_drop] 2016-04-06T02:54:15.667-0500 c20013| 2016-04-06T02:53:04.661-0500 I NETWORK [conn7] end connection 192.168.100.28:49612 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.669-0500 c20013| 2016-04-06T02:53:04.661-0500 I COMMAND [conn12] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.670-0500 c20013| 2016-04-06T02:53:04.661-0500 D ASIO [NetworkInterfaceASIO-SyncSourceFeedback-0] Request 1664 finished with response: { ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.671-0500 c20013| 2016-04-06T02:53:04.659-0500 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.672-0500 c20013| 2016-04-06T02:53:04.660-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1658 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:15.676-0500 c20013| 2016-04-06T02:53:04.661-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1661 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.678-0500 c20013| 2016-04-06T02:53:04.661-0500 I COMMAND [conn8] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.679-0500 c20013| 2016-04-06T02:53:04.661-0500 D NETWORK [conn8] SocketException: remote: 192.168.100.28:49648 error: 9001 socket exception [CLOSED] server [192.168.100.28:49648] [js_test:multi_coll_drop] 2016-04-06T02:54:15.682-0500 c20013| 2016-04-06T02:53:04.661-0500 I NETWORK [conn8] end connection 192.168.100.28:49648 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.685-0500 c20013| 2016-04-06T02:53:04.661-0500 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.687-0500 c20013| 2016-04-06T02:52:51.722-0500 D COMMAND [conn13] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.689-0500 c20013| 2016-04-06T02:53:04.661-0500 D NETWORK [conn12] SocketException: remote: 192.168.100.28:50369 error: 9001 socket exception [CLOSED] server [192.168.100.28:50369] [js_test:multi_coll_drop] 2016-04-06T02:54:15.689-0500 c20013| 2016-04-06T02:53:04.661-0500 D NETWORK [conn9] SocketException: remote: 192.168.100.28:49652 error: 9001 socket exception [CLOSED] server [192.168.100.28:49652] [js_test:multi_coll_drop] 2016-04-06T02:54:15.693-0500 c20013| 2016-04-06T02:53:04.661-0500 I NETWORK [conn12] end connection 192.168.100.28:50369 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.696-0500 c20013| 2016-04-06T02:53:04.661-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1660 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.699-0500 c20013| 2016-04-06T02:53:04.661-0500 I NETWORK [conn9] end connection 192.168.100.28:49652 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.703-0500 c20013| 2016-04-06T02:53:04.661-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1656 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.707-0500 c20013| 2016-04-06T02:53:04.661-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1659 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: 
"multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.709-0500 c20013| 2016-04-06T02:53:04.661-0500 I COMMAND [conn11] command admin.$cmd command: isMaster { ismaster: 1.0 } numYields:0 reslen:443 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.712-0500 c20013| 2016-04-06T02:53:04.661-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1661 finished with response: { term: 3, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.712-0500 c20013| 2016-04-06T02:52:57.504-0500 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager [js_test:multi_coll_drop] 2016-04-06T02:54:15.716-0500 c20013| 2016-04-06T02:53:04.662-0500 I COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 7157ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.716-0500 c20013| 2016-04-06T02:53:04.662-0500 I REPL [ReplicationExecutor] could not find member to sync from [js_test:multi_coll_drop] 2016-04-06T02:54:15.717-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 1656 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:52:53.723-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.719-0500 c20013| 2016-04-06T02:53:04.662-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:04.662Z [js_test:multi_coll_drop] 2016-04-06T02:54:15.725-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1656 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.734-0500 c20013| 2016-04-06T02:53:04.662-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:04.662Z [js_test:multi_coll_drop] 2016-04-06T02:54:15.739-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 1660 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:09.659-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 3, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.740-0500 c20013| 2016-04-06T02:53:04.662-0500 I REPL [ReplicationExecutor] dry election run succeeded, running for election [js_test:multi_coll_drop] 2016-04-06T02:54:15.746-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 1660 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:09.659-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: true, term: 3, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } reason: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 2016-04-06T02:54:15.750-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1660 finished with response: CallbackCanceled: Callback canceled [js_test:multi_coll_drop] 
2016-04-06T02:54:15.754-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1669 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:14.659-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.774-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1670 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:14.662-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.778-0500 c20013| 2016-04-06T02:53:04.662-0500 D QUERY [replExecDBWorker-1] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:15.783-0500 c20013| 2016-04-06T02:53:04.662-0500 I COMMAND [conn13] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.791-0500 c20013| 2016-04-06T02:53:04.662-0500 D NETWORK [conn13] SocketException: remote: 192.168.100.28:50568 error: 9001 socket exception [CLOSED] server [192.168.100.28:50568] [js_test:multi_coll_drop] 2016-04-06T02:54:15.793-0500 c20013| 2016-04-06T02:53:04.662-0500 I NETWORK [conn13] end connection 192.168.100.28:50568 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.797-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1671 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:09.662-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 4, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.804-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1672 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:09.662-0500 cmd:{ replSetRequestVotes: 1, setName: "multidrop-configRS", dryRun: false, term: 4, candidateIndex: 2, configVersion: 1, lastCommittedOp: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.805-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.808-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1669 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.810-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1670 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.812-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1672 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.812-0500 c20013| 2016-04-06T02:53:04.662-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1673 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.815-0500 c20013| 2016-04-06T02:53:04.663-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1670 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 3, 
primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.816-0500 c20013| 2016-04-06T02:53:04.663-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:07.163Z [js_test:multi_coll_drop] 2016-04-06T02:54:15.827-0500 c20013| 2016-04-06T02:53:04.663-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1669 finished with response: { ok: 1.0, electionTime: new Date(6270347962317012993), state: 1, v: 1, hbmsg: "", set: "multidrop-configRS", term: 3, primaryId: 0, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.829-0500 c20013| 2016-04-06T02:53:04.663-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1671 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.830-0500 c20013| 2016-04-06T02:53:04.663-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:07.163Z [js_test:multi_coll_drop] 2016-04-06T02:54:15.833-0500 c20013| 2016-04-06T02:53:04.663-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:15.834-0500 c20013| 2016-04-06T02:53:04.663-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1673 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:15.838-0500 c20013| 2016-04-06T02:53:04.664-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1671 finished with response: { term: 3, voteGranted: false, reason: "candidate's data is staler than mine", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.840-0500 c20013| 2016-04-06T02:53:04.664-0500 I REPL [ReplicationExecutor] VoteRequester: Got no vote from mongovm16:20011 because: candidate's data is staler than mine, resp:{ term: 3, voteGranted: false, reason: "candidate's data is staler than mine", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.844-0500 c20013| 2016-04-06T02:52:44.610-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52017 #17 (12 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.846-0500 c20013| 2016-04-06T02:53:04.664-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52556 #18 (8 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.847-0500 c20013| 2016-04-06T02:53:04.664-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52559 #19 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.849-0500 c20013| 2016-04-06T02:53:04.664-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52858 #20 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.853-0500 c20013| 2016-04-06T02:52:44.228-0500 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.853-0500 c20013| 2016-04-06T02:53:04.664-0500 D COMMAND [conn18] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.856-0500 c20013| 2016-04-06T02:53:04.664-0500 D COMMAND [conn14] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:15.857-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn20] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } 
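What follows is protocolVersion 1's two-phase, Raft-style election. The candidate first sends replSetRequestVotes with dryRun: true in its current term (3), a round that is free to lose because no term is consumed; only after the dry run succeeds does it bump its term to 4 and ask for binding votes. A voter refuses when the candidate's oplog position is behind its own, which is exactly the "candidate's data is staler than mine" denial from mongovm16:20011 above, but the candidate's own vote plus the voteGranted: true from mongovm16:20012 still makes a majority of three. Annotated shape of the binding request, reproduced from the log (an internal command that clients never send):

    // The binding (non-dry-run) vote request, as in RemoteCommand 1671/1672 above.
    var voteRequest = {
        replSetRequestVotes: 1,
        setName: "multidrop-configRS",
        dryRun: false,                 // true during the first, non-binding round
        term: 4,                       // bumped from 3 only after the dry run won
        candidateIndex: 2,             // this node's slot in the replica set config
        configVersion: 1,
        lastCommittedOp: { ts: Timestamp(1459929163, 8), t: 3 }  // candidate's position
    };
    // A voter grants the vote only if it has not already voted in this term and
    // its own oplog is no newer than the candidate's position above.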
[js_test:multi_coll_drop] 2016-04-06T02:54:15.859-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.861-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.865-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.866-0500 c20013| 2016-04-06T02:53:04.665-0500 D NETWORK [conn14] SocketException: remote: 192.168.100.28:50633 error: 9001 socket exception [CLOSED] server [192.168.100.28:50633] [js_test:multi_coll_drop] 2016-04-06T02:54:15.868-0500 c20013| 2016-04-06T02:53:04.665-0500 I NETWORK [conn14] end connection 192.168.100.28:50633 (9 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.870-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn20] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.871-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn17] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.873-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn20] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.875-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.875-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn20] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.879-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn19] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.882-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn20] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:443 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.884-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.885-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn18] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:15.887-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:458 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.890-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 3 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.890-0500 
c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn19] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.899-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn17] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.900-0500 c20013| 2016-04-06T02:53:04.665-0500 D COMMAND [conn17] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:15.903-0500 c20013| 2016-04-06T02:53:04.665-0500 D QUERY [conn19] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:15.907-0500 c20013| 2016-04-06T02:53:04.665-0500 I COMMAND [conn19] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.909-0500 c20013| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1672 finished with response: { term: 4, voteGranted: true, reason: "", ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.912-0500 c20013| 2016-04-06T02:53:04.666-0500 I COMMAND [conn17] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 3 } numYields:0 reslen:458 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.916-0500 c20013| 2016-04-06T02:53:04.666-0500 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 4 [js_test:multi_coll_drop] 2016-04-06T02:54:15.920-0500 c20013| 2016-04-06T02:53:04.666-0500 I REPL [ReplicationExecutor] transition to PRIMARY [js_test:multi_coll_drop] 2016-04-06T02:54:15.923-0500 c20013| 2016-04-06T02:53:04.666-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:04.666Z [js_test:multi_coll_drop] 2016-04-06T02:54:15.927-0500 c20013| 2016-04-06T02:53:04.666-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:04.666Z [js_test:multi_coll_drop] 2016-04-06T02:54:15.931-0500 c20013| 2016-04-06T02:53:04.666-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1679 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:14.666-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.933-0500 c20013| 2016-04-06T02:53:04.666-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52882 #21 (10 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.937-0500 c20013| 2016-04-06T02:53:04.666-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1680 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:14.666-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.937-0500 c20013| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1679 on host mongovm16:20011 [js_test:multi_coll_drop] 
2016-04-06T02:54:15.937-0500 c20013| 2016-04-06T02:53:04.666-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1680 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:15.938-0500 c20013| 2016-04-06T02:53:04.666-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52883 #22 (11 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.940-0500 c20013| 2016-04-06T02:53:04.666-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52884 #23 (12 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:15.941-0500 c20013| 2016-04-06T02:53:04.666-0500 D COMMAND [conn22] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.945-0500 c20013| 2016-04-06T02:53:04.667-0500 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.954-0500 c20013| 2016-04-06T02:53:04.667-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:15.955-0500 c20013| 2016-04-06T02:53:04.667-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:15.956-0500 c20013| 2016-04-06T02:53:04.667-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.961-0500 c20013| 2016-04-06T02:53:04.667-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.965-0500 c20013| 2016-04-06T02:53:04.667-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1679 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, durableOpTime: { ts: Timestamp 1459929171000|2, t: 3 }, opTime: { ts: Timestamp 1459929171000|2, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:15.971-0500 c20013| 2016-04-06T02:53:04.667-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:15.974-0500 c20013| 2016-04-06T02:53:04.667-0500 I REPL [ReplicationExecutor] Member mongovm16:20011 is now in state SECONDARY 
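Now that c20013 is primary it starts consuming replSetUpdatePosition reports like the one arriving on conn22: every node periodically forwards, for each member it knows about, the optime that member has applied in memory and the optime it has made durable on disk. A majority of durable positions at or past an optime is what lets a primary advance the commit point (the "Updating _lastCommittedOpTime" lines earlier), which in turn is what readConcern majority readers and w: "majority" writers block on. One report from the log, annotated (again an internal command; Timestamp values converted from the log's ms|increment rendering):

    // The position report logged on conn22, with the fields spelled out.
    var positionReport = {
        replSetUpdatePosition: 1,
        optimes: [
            { memberId: 0, cfgver: 1,
              durableOpTime: { ts: Timestamp(1459929146, 9),  t: 2 },    // journaled
              appliedOpTime: { ts: Timestamp(1459929146, 10), t: 2 } },  // applied
            { memberId: 1, cfgver: 1,
              durableOpTime: { ts: Timestamp(1459929146, 10), t: 2 },
              appliedOpTime: { ts: Timestamp(1459929146, 10), t: 2 } },
            { memberId: 2, cfgver: 1,
              durableOpTime: { ts: Timestamp(1459929152, 2),  t: 3 },
              appliedOpTime: { ts: Timestamp(1459929152, 2),  t: 3 } }
        ]
    };
    // Two of the three members are durable at or past { ts: Timestamp(1459929146, 10),
    // t: 2 }, so from this report alone that is the furthest the commit point can move.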
[js_test:multi_coll_drop] 2016-04-06T02:54:15.978-0500 c20013| 2016-04-06T02:53:04.667-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot optime: { ts: Timestamp 1459929161000|1, t: 2 }, currentCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:15.992-0500 c20013| 2016-04-06T02:53:04.667-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:06.667Z [js_test:multi_coll_drop] 2016-04-06T02:54:15.993-0500 c20013| 2016-04-06T02:53:04.668-0500 D COMMAND [conn23] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:54:15.998-0500 c20013| 2016-04-06T02:53:04.668-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1680 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20013", term: 4, primaryId: 0, durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, opTime: { ts: Timestamp 1459929146000|10, t: 2 } } [js_test:multi_coll_drop] 2016-04-06T02:54:16.017-0500 c20013| 2016-04-06T02:53:04.668-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:06.668Z [js_test:multi_coll_drop] 2016-04-06T02:54:16.018-0500 c20013| 2016-04-06T02:53:04.668-0500 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.021-0500 c20013| 2016-04-06T02:53:04.668-0500 D COMMAND [conn23] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.023-0500 c20013| 2016-04-06T02:53:04.668-0500 I COMMAND [conn23] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.023-0500 c20013| 2016-04-06T02:53:04.668-0500 D COMMAND [conn23] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.024-0500 c20013| 2016-04-06T02:53:04.668-0500 I COMMAND [conn23] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.024-0500 c20013| 2016-04-06T02:53:04.669-0500 D COMMAND [conn21] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:54:16.025-0500 c20013| 2016-04-06T02:53:04.669-0500 I COMMAND [conn21] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.027-0500 c20013| 2016-04-06T02:53:04.670-0500 D COMMAND [conn21] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929146000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.028-0500 c20013| 2016-04-06T02:53:04.670-0500 I COMMAND [conn21] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929146000|10 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } planSummary: COLLSCAN cursorid:22887452903 keysExamined:0 docsExamined:53 numYields:0 nreturned:53 reslen:20559 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.029-0500 c20013| 
2016-04-06T02:53:04.673-0500 D COMMAND [conn21] run command local.$cmd { getMore: 22887452903, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } [js_test:multi_coll_drop] 2016-04-06T02:54:16.032-0500 c20013| 2016-04-06T02:53:04.673-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.032-0500 c20013| 2016-04-06T02:53:04.674-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.034-0500 c20013| 2016-04-06T02:53:04.674-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.035-0500 c20013| 2016-04-06T02:53:04.674-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929152000|2, t: 3 } and is durable through: { ts: Timestamp 1459929146000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.042-0500 c20013| 2016-04-06T02:53:04.674-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929152000|2, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.045-0500 c20013| 2016-04-06T02:53:04.680-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.045-0500 c20013| 2016-04-06T02:53:04.680-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.047-0500 c20013| 2016-04-06T02:53:04.680-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.049-0500 c20013| 2016-04-06T02:53:04.680-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|8, t: 3 } and is durable 
through: { ts: Timestamp 1459929146000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.060-0500 c20013| 2016-04-06T02:53:04.680-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.066-0500 c20013| 2016-04-06T02:53:04.680-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.071-0500 c20013| 2016-04-06T02:53:04.680-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.072-0500 c20013| 2016-04-06T02:53:04.680-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.076-0500 c20013| 2016-04-06T02:53:04.680-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|8, t: 3 } and is durable through: { ts: Timestamp 1459929146000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.080-0500 c20013| 2016-04-06T02:53:04.680-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.082-0500 c20013| 2016-04-06T02:53:04.680-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52887 #24 (13 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:16.084-0500 c20013| 2016-04-06T02:53:04.680-0500 D COMMAND [conn24] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20012" } [js_test:multi_coll_drop] 2016-04-06T02:54:16.085-0500 c20013| 2016-04-06T02:53:04.681-0500 I COMMAND [conn24] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20012" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.092-0500 c20013| 2016-04-06T02:53:04.686-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ 
{ durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.093-0500 c20013| 2016-04-06T02:53:04.686-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.101-0500 c20013| 2016-04-06T02:53:04.686-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.104-0500 c20013| 2016-04-06T02:53:04.686-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|11, t: 3 } and is durable through: { ts: Timestamp 1459929146000|10, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.112-0500 c20013| 2016-04-06T02:53:04.686-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.117-0500 c20013| 2016-04-06T02:53:04.686-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.117-0500 c20013| 2016-04-06T02:53:04.686-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.120-0500 c20013| 2016-04-06T02:53:04.686-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.122-0500 c20013| 2016-04-06T02:53:04.686-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|11, t: 3 } and is durable through: { ts: Timestamp 1459929161000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.129-0500 c20013| 2016-04-06T02:53:04.686-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, 
{ durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.137-0500 c20013| 2016-04-06T02:53:04.688-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.137-0500 c20013| 2016-04-06T02:53:04.688-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.140-0500 c20013| 2016-04-06T02:53:04.688-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.144-0500 c20013| 2016-04-06T02:53:04.688-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|12, t: 3 } and is durable through: { ts: Timestamp 1459929161000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.151-0500 c20013| 2016-04-06T02:53:04.688-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.155-0500 c20013| 2016-04-06T02:53:04.689-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.157-0500 c20013| 2016-04-06T02:53:04.689-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.160-0500 c20013| 2016-04-06T02:53:04.689-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.164-0500 c20013| 2016-04-06T02:53:04.689-0500 D REPL [conn22] received notification that 
node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929161000|15, t: 3 } and is durable through: { ts: Timestamp 1459929161000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.168-0500 c20013| 2016-04-06T02:53:04.689-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929161000|15, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.173-0500 c20013| 2016-04-06T02:53:04.690-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.174-0500 c20013| 2016-04-06T02:53:04.690-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.176-0500 c20013| 2016-04-06T02:53:04.690-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.179-0500 c20013| 2016-04-06T02:53:04.690-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|1, t: 3 } and is durable through: { ts: Timestamp 1459929161000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.184-0500 c20013| 2016-04-06T02:53:04.690-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.189-0500 c20013| 2016-04-06T02:53:04.691-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.191-0500 c20013| 
2016-04-06T02:53:04.691-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.195-0500 c20013| 2016-04-06T02:53:04.691-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.198-0500 c20013| 2016-04-06T02:53:04.692-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|1, t: 3 } and is durable through: { ts: Timestamp 1459929161000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.201-0500 c20013| 2016-04-06T02:53:04.692-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.204-0500 c20013| 2016-04-06T02:53:04.692-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.205-0500 c20013| 2016-04-06T02:53:04.692-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.207-0500 c20013| 2016-04-06T02:53:04.692-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.210-0500 c20013| 2016-04-06T02:53:04.692-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|4, t: 3 } and is durable through: { ts: Timestamp 1459929161000|11, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.213-0500 c20013| 2016-04-06T02:53:04.692-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929161000|11, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.216-0500 c20013| 2016-04-06T02:53:04.693-0500 D COMMAND [conn24] run command admin.$cmd { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.218-0500 c20013| 2016-04-06T02:53:04.693-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.221-0500 c20013| 2016-04-06T02:53:04.693-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.235-0500 c20013| 2016-04-06T02:53:04.693-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|4, t: 3 } and is durable through: { ts: Timestamp 1459929162000|1, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.243-0500 c20013| 2016-04-06T02:53:04.693-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|1, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.247-0500 c20013| 2016-04-06T02:53:04.693-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.248-0500 c20013| 2016-04-06T02:53:04.693-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.252-0500 c20013| 2016-04-06T02:53:04.693-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.254-0500 c20013| 2016-04-06T02:53:04.693-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|4, t: 3 } and is durable through: { ts: Timestamp 1459929162000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.265-0500 c20013| 2016-04-06T02:53:04.693-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 
}, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.270-0500 c20013| 2016-04-06T02:53:04.694-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.270-0500 c20013| 2016-04-06T02:53:04.694-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.273-0500 c20013| 2016-04-06T02:53:04.694-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.275-0500 c20013| 2016-04-06T02:53:04.694-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|5, t: 3 } and is durable through: { ts: Timestamp 1459929162000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.280-0500 c20013| 2016-04-06T02:53:04.694-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.284-0500 c20013| 2016-04-06T02:53:04.695-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.284-0500 c20013| 2016-04-06T02:53:04.695-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.288-0500 c20013| 2016-04-06T02:53:04.695-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.292-0500 c20013| 2016-04-06T02:53:04.695-0500 D REPL [conn22] 
received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|5, t: 3 } and is durable through: { ts: Timestamp 1459929162000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.301-0500 c20013| 2016-04-06T02:53:04.695-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.311-0500 c20013| 2016-04-06T02:53:04.696-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.312-0500 c20013| 2016-04-06T02:53:04.696-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.317-0500 c20013| 2016-04-06T02:53:04.696-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.320-0500 c20013| 2016-04-06T02:53:04.696-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|8, t: 3 } and is durable through: { ts: Timestamp 1459929162000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.325-0500 c20013| 2016-04-06T02:53:04.696-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.334-0500 c20013| 2016-04-06T02:53:04.696-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 
2016-04-06T02:54:16.336-0500 c20013| 2016-04-06T02:53:04.696-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.340-0500 c20013| 2016-04-06T02:53:04.696-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.343-0500 c20013| 2016-04-06T02:53:04.696-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|8, t: 3 } and is durable through: { ts: Timestamp 1459929162000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.348-0500 c20013| 2016-04-06T02:53:04.696-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.355-0500 c20013| 2016-04-06T02:53:04.697-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.358-0500 c20013| 2016-04-06T02:53:04.697-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.361-0500 c20013| 2016-04-06T02:53:04.697-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.366-0500 c20013| 2016-04-06T02:53:04.697-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|9, t: 3 } and is durable through: { ts: Timestamp 1459929162000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.374-0500 c20013| 2016-04-06T02:53:04.697-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.379-0500 c20013| 2016-04-06T02:53:04.698-0500 D COMMAND [conn22] run 
command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.382-0500 c20013| 2016-04-06T02:53:04.698-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.383-0500 c20013| 2016-04-06T02:53:04.698-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.386-0500 c20013| 2016-04-06T02:53:04.698-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|9, t: 3 } and is durable through: { ts: Timestamp 1459929162000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.389-0500 c20013| 2016-04-06T02:53:04.698-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.395-0500 c20013| 2016-04-06T02:53:04.701-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.396-0500 c20013| 2016-04-06T02:53:04.701-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.400-0500 c20013| 2016-04-06T02:53:04.701-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.407-0500 c20013| 2016-04-06T02:53:04.701-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|12, t: 3 } and is durable through: { ts: Timestamp 1459929162000|9, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.411-0500 c20013| 2016-04-06T02:53:04.701-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 
1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|9, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.414-0500 c20013| 2016-04-06T02:53:04.701-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.414-0500 c20013| 2016-04-06T02:53:04.701-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.417-0500 c20013| 2016-04-06T02:53:04.701-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.419-0500 c20013| 2016-04-06T02:53:04.701-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|12, t: 3 } and is durable through: { ts: Timestamp 1459929162000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.426-0500 c20013| 2016-04-06T02:53:04.701-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.433-0500 c20013| 2016-04-06T02:53:04.702-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.434-0500 c20013| 2016-04-06T02:53:04.702-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.436-0500 c20013| 2016-04-06T02:53:04.702-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.439-0500 c20013| 
2016-04-06T02:53:04.702-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|13, t: 3 } and is durable through: { ts: Timestamp 1459929162000|12, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.444-0500 c20013| 2016-04-06T02:53:04.703-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|12, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.447-0500 c20013| 2016-04-06T02:53:04.704-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.448-0500 c20013| 2016-04-06T02:53:04.704-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.451-0500 c20013| 2016-04-06T02:53:04.704-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.456-0500 c20013| 2016-04-06T02:53:04.704-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|13, t: 3 } and is durable through: { ts: Timestamp 1459929162000|13, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.465-0500 c20013| 2016-04-06T02:53:04.704-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.471-0500 c20013| 2016-04-06T02:53:04.704-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } 
] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.473-0500 c20013| 2016-04-06T02:53:04.704-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.492-0500 c20013| 2016-04-06T02:53:04.704-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.495-0500 c20013| 2016-04-06T02:53:04.704-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|16, t: 3 } and is durable through: { ts: Timestamp 1459929162000|13, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.537-0500 c20013| 2016-04-06T02:53:04.704-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|13, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.540-0500 c20013| 2016-04-06T02:53:04.706-0500 D COMMAND [conn22] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.541-0500 c20013| 2016-04-06T02:53:04.706-0500 D COMMAND [conn22] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.542-0500 c20013| 2016-04-06T02:53:04.706-0500 D REPL [conn22] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.548-0500 c20013| 2016-04-06T02:53:04.706-0500 D REPL [conn22] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|17, t: 3 } and is durable through: { ts: Timestamp 1459929162000|16, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.555-0500 c20013| 2016-04-06T02:53:04.706-0500 I COMMAND [conn22] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.560-0500 c20013| 
2016-04-06T02:53:04.706-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.560-0500 c20013| 2016-04-06T02:53:04.706-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.567-0500 c20013| 2016-04-06T02:53:04.706-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.573-0500 c20013| 2016-04-06T02:53:04.706-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|17, t: 3 } and is durable through: { ts: Timestamp 1459929162000|16, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.579-0500 c20013| 2016-04-06T02:53:04.706-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|16, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.586-0500 c20013| 2016-04-06T02:53:04.707-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.586-0500 c20013| 2016-04-06T02:53:04.707-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.592-0500 c20013| 2016-04-06T02:53:04.707-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.593-0500 c20013| 2016-04-06T02:53:04.707-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|17, t: 3 } and is durable through: { ts: Timestamp 1459929162000|17, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.598-0500 c20013| 2016-04-06T02:53:04.707-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.602-0500 c20013| 2016-04-06T02:53:04.707-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.605-0500 c20013| 2016-04-06T02:53:04.707-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.610-0500 c20013| 2016-04-06T02:53:04.707-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.615-0500 c20013| 2016-04-06T02:53:04.707-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|20, t: 3 } and is durable through: { ts: Timestamp 1459929162000|17, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.621-0500 c20013| 2016-04-06T02:53:04.707-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.625-0500 c20013| 2016-04-06T02:53:04.708-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.625-0500 c20013| 2016-04-06T02:53:04.708-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.628-0500 c20013| 2016-04-06T02:53:04.708-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } 
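The long run of replSetUpdatePosition traffic above is each downstream member reporting, per node, the optime it has applied and the optime it has made durable; the new primary logs each report as a "received notification that node with memberID N ... has reached optime ... and is durable through ..." pair and uses these positions to advance the majority commit point (compare the "currentCommittedOpTime" record earlier in this stretch). When eyeballing such a flood, a throwaway parser can help; the sketch below is illustrative only (parseProgress is a made-up helper, not part of the test) and simply mirrors the notification-line format seen above:

    // Illustrative only: extract memberID, config version, and the reported
    // optime from a "received notification that node ..." log line above.
    var progressRe = /memberID (\d+) in config with version (\d+) has reached optime: \{ ts: Timestamp (\d+)\|(\d+), t: (\d+) \}/;
    function parseProgress(logLine) {
        var m = progressRe.exec(logLine);
        if (!m) {
            return null;  // not a progress-notification line
        }
        return {
            memberId: Number(m[1]),
            configVersion: Number(m[2]),
            optime: {ts: m[3] + "|" + m[4], term: Number(m[5])},
        };
    }
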
[js_test:multi_coll_drop] 2016-04-06T02:54:16.631-0500 c20013| 2016-04-06T02:53:04.708-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|21, t: 3 } and is durable through: { ts: Timestamp 1459929162000|17, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.640-0500 c20013| 2016-04-06T02:53:04.708-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|17, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.653-0500 c20013| 2016-04-06T02:53:04.709-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.654-0500 c20013| 2016-04-06T02:53:04.709-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.660-0500 c20013| 2016-04-06T02:53:04.709-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.662-0500 c20013| 2016-04-06T02:53:04.709-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|21, t: 3 } and is durable through: { ts: Timestamp 1459929162000|20, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.674-0500 c20013| 2016-04-06T02:53:04.709-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|20, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.679-0500 c20013| 2016-04-06T02:53:04.710-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { 
ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.679-0500 c20013| 2016-04-06T02:53:04.710-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.680-0500 c20013| 2016-04-06T02:53:04.710-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.684-0500 c20013| 2016-04-06T02:53:04.710-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|21, t: 3 } and is durable through: { ts: Timestamp 1459929162000|21, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.687-0500 c20013| 2016-04-06T02:53:04.710-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.691-0500 c20013| 2016-04-06T02:53:04.711-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.692-0500 c20013| 2016-04-06T02:53:04.711-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.695-0500 c20013| 2016-04-06T02:53:04.711-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.698-0500 c20013| 2016-04-06T02:53:04.711-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|24, t: 3 } and is durable through: { ts: Timestamp 1459929162000|21, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.702-0500 c20013| 2016-04-06T02:53:04.711-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms 
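
Note how member 1's applied position climbs through Timestamp 1459929162000|17, |20, |21 while its durable position stays one report behind: apply batches land before the journal flush that makes them durable. In this log format the value before the '|' is the timestamp's wall-clock component and the value after it is the increment within that second, i.e. the two fields of a BSON Timestamp. A sketch of comparing two such reported positions in the shell (bsonWoCompare is a real shell builtin; the literal values simply mirror the log):

    // durable Timestamp(1459929162, 17) trails applied Timestamp(1459929162, 21)
    var durable = Timestamp(1459929162, 17);
    var applied = Timestamp(1459929162, 21);
    print(bsonWoCompare({t: durable}, {t: applied}));  // -1: durable is older
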
[js_test:multi_coll_drop] 2016-04-06T02:54:16.704-0500 c20013| 2016-04-06T02:53:04.712-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.705-0500 c20013| 2016-04-06T02:53:04.712-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.717-0500 c20013| 2016-04-06T02:53:04.712-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.738-0500 c20013| 2016-04-06T02:53:04.712-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|25, t: 3 } and is durable through: { ts: Timestamp 1459929162000|21, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.758-0500 c20013| 2016-04-06T02:53:04.712-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|21, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.766-0500 c20013| 2016-04-06T02:53:04.713-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.772-0500 c20013| 2016-04-06T02:53:04.713-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.773-0500 c20013| 2016-04-06T02:53:04.713-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.773-0500 c20013| 2016-04-06T02:53:04.713-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|25, t: 3 } and is durable through: { ts: Timestamp 1459929162000|24, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.773-0500 c20013| 2016-04-06T02:53:04.713-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.783-0500 c20013| 2016-04-06T02:53:04.714-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.784-0500 c20013| 2016-04-06T02:53:04.714-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.785-0500 c20013| 2016-04-06T02:53:04.714-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.787-0500 c20013| 2016-04-06T02:53:04.714-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|28, t: 3 } and is durable through: { ts: Timestamp 1459929162000|24, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.791-0500 c20013| 2016-04-06T02:53:04.714-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|24, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.805-0500 c20013| 2016-04-06T02:53:04.715-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.806-0500 c20013| 2016-04-06T02:53:04.715-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.808-0500 c20013| 2016-04-06T02:53:04.715-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable 
through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.810-0500 c20013| 2016-04-06T02:53:04.715-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929162000|28, t: 3 } and is durable through: { ts: Timestamp 1459929162000|25, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.813-0500 c20013| 2016-04-06T02:53:04.715-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.819-0500 c20013| 2016-04-06T02:53:04.715-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.819-0500 c20013| 2016-04-06T02:53:04.715-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.822-0500 c20013| 2016-04-06T02:53:04.715-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.827-0500 c20013| 2016-04-06T02:53:04.715-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|1, t: 3 } and is durable through: { ts: Timestamp 1459929162000|25, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.859-0500 c20013| 2016-04-06T02:53:04.715-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|1, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.863-0500 c20013| 2016-04-06T02:53:04.717-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: 
Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.864-0500 c20013| 2016-04-06T02:53:04.717-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.866-0500 ReplSetTest Could not call ismaster on node connection to mongovm16:20011: Error: error doing query: failed: network error while attempting to run command 'ismaster' on host 'mongovm16:20011' [js_test:multi_coll_drop] 2016-04-06T02:54:16.869-0500 c20013| 2016-04-06T02:53:04.717-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.879-0500 c20013| 2016-04-06T02:53:04.717-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|4, t: 3 } and is durable through: { ts: Timestamp 1459929162000|25, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.885-0500 c20013| 2016-04-06T02:53:04.717-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|25, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.888-0500 c20013| 2016-04-06T02:53:04.718-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.889-0500 c20013| 2016-04-06T02:53:04.718-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.892-0500 c20013| 2016-04-06T02:53:04.718-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.895-0500 c20013| 2016-04-06T02:53:04.718-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|4, t: 3 } and is durable through: { ts: Timestamp 1459929162000|28, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.898-0500 c20013| 2016-04-06T02:53:04.718-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, 
appliedOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.904-0500 c20013| 2016-04-06T02:53:04.718-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.904-0500 c20013| 2016-04-06T02:53:04.718-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.907-0500 c20013| 2016-04-06T02:53:04.718-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.910-0500 c20013| 2016-04-06T02:53:04.718-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|5, t: 3 } and is durable through: { ts: Timestamp 1459929162000|28, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.918-0500 c20013| 2016-04-06T02:53:04.718-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929162000|28, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.922-0500 c20013| 2016-04-06T02:53:04.719-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.922-0500 c20013| 2016-04-06T02:53:04.719-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.926-0500 c20013| 2016-04-06T02:53:04.719-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.933-0500 c20013| 2016-04-06T02:53:04.719-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached 
optime: { ts: Timestamp 1459929163000|5, t: 3 } and is durable through: { ts: Timestamp 1459929163000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.938-0500 c20013| 2016-04-06T02:53:04.719-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.941-0500 c20013| 2016-04-06T02:53:04.720-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.941-0500 c20013| 2016-04-06T02:53:04.720-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.946-0500 c20013| 2016-04-06T02:53:04.720-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.954-0500 c20013| 2016-04-06T02:53:04.720-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|4, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.956-0500 c20013| 2016-04-06T02:53:04.720-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|4, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.959-0500 c20013| 2016-04-06T02:53:04.720-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.960-0500 c20013| 2016-04-06T02:53:04.720-0500 D COMMAND [conn24] command: 
replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.964-0500 c20013| 2016-04-06T02:53:04.720-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.973-0500 c20013| 2016-04-06T02:53:04.720-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|5, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.977-0500 c20013| 2016-04-06T02:53:04.720-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|5, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:16.985-0500 c20013| 2016-04-06T02:53:04.721-0500 D COMMAND [conn24] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:16.986-0500 c20013| 2016-04-06T02:53:04.721-0500 D COMMAND [conn24] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:16.993-0500 c20013| 2016-04-06T02:53:04.721-0500 D REPL [conn24] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929146000|10, t: 2 } and is durable through: { ts: Timestamp 1459929146000|9, t: 2 } [js_test:multi_coll_drop] 2016-04-06T02:54:16.997-0500 c20013| 2016-04-06T02:53:04.721-0500 D REPL [conn24] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929163000|8, t: 3 } and is durable through: { ts: Timestamp 1459929163000|8, t: 3 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.023-0500 c20013| 2016-04-06T02:53:04.721-0500 I COMMAND [conn24] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929146000|9, t: 2 }, appliedOpTime: { ts: Timestamp 1459929146000|10, t: 2 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.023-0500 c20013| 2016-04-06T02:53:04.721-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:52897 #25 (14 connections now open) 
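
With that last report, member 1's durable and applied positions both reach Timestamp 1459929163000|8, matching member 2's, so the set has converged. In a jstest this convergence is normally awaited with the ReplSetTest fixture rather than by reading the log; a minimal sketch, assuming 'rst' is a handle to the running multidrop-configRS set (awaitReplication and awaitLastOpCommitted are real ReplSetTest helpers from jstests/libs):

    // Block until every member has applied the primary's last op...
    rst.awaitReplication();
    // ...and until that op is majority-committed on the set.
    rst.awaitLastOpCommitted();
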
[js_test:multi_coll_drop] 2016-04-06T02:54:17.024-0500 c20013| 2016-04-06T02:53:04.722-0500 D COMMAND [conn25] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20010" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.025-0500 c20013| 2016-04-06T02:53:04.722-0500 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20010" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.026-0500 c20013| 2016-04-06T02:53:04.722-0500 D COMMAND [conn25] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.028-0500 c20013| 2016-04-06T02:53:04.722-0500 I COMMAND [conn25] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.031-0500 c20013| 2016-04-06T02:53:04.722-0500 D COMMAND [conn25] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.034-0500 c20013| 2016-04-06T02:53:04.722-0500 I COMMAND [conn25] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.034-0500 c20013| 2016-04-06T02:53:05.166-0500 D COMMAND [conn20] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.038-0500 c20013| 2016-04-06T02:53:05.167-0500 I COMMAND [conn20] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.038-0500 c20013| 2016-04-06T02:53:05.169-0500 D COMMAND [conn23] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.039-0500 c20013| 2016-04-06T02:53:05.169-0500 I COMMAND [conn23] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.041-0500 c20013| 2016-04-06T02:53:05.223-0500 D COMMAND [conn25] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.045-0500 c20013| 2016-04-06T02:53:05.223-0500 I COMMAND [conn25] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.046-0500 c20013| 2016-04-06T02:53:05.663-0500 D REPL [rsSync] Removing temporary collections from config [js_test:multi_coll_drop] 2016-04-06T02:54:17.046-0500 c20013| 2016-04-06T02:53:05.663-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted [js_test:multi_coll_drop] 2016-04-06T02:54:17.052-0500 c20013| 2016-04-06T02:53:05.663-0500 I COMMAND [conn21] command local.oplog.rs command: getMore { getMore: 22887452903, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929163000|8, t: 3 } } cursorid:22887452903 numYields:1 nreturned:1 reslen:449 locks:{ Global: { acquireCount: { r: 6 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 107 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 989ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.053-0500 c20013| 2016-04-06T02:53:05.675-0500 D COMMAND [conn23] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.054-0500 c20013| 2016-04-06T02:53:05.676-0500 D COMMAND [conn21] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 22887452903 ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.063-0500 
c20013| 2016-04-06T02:53:05.676-0500 I COMMAND [conn21] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 22887452903 ] } numYields:0 reslen:175 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.064-0500 c20013| 2016-04-06T02:53:05.677-0500 D COMMAND [conn17] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.064-0500 c20013| 2016-04-06T02:53:05.677-0500 D COMMAND [conn17] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:17.068-0500 c20013| 2016-04-06T02:53:05.679-0500 D COMMAND [conn20] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.074-0500 c20013| 2016-04-06T02:53:05.682-0500 I COMMAND [conn23] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 6ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.077-0500 c20013| 2016-04-06T02:53:05.682-0500 I COMMAND [conn20] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 2ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.080-0500 c20013| 2016-04-06T02:53:05.682-0500 I COMMAND [conn17] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } numYields:0 reslen:480 locks:{} protocol:op_command 4ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.086-0500 c20013| 2016-04-06T02:53:05.682-0500 D COMMAND [conn10] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.087-0500 c20013| 2016-04-06T02:53:05.682-0500 D QUERY [conn10] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.089-0500 c20013| 2016-04-06T02:53:05.682-0500 I WRITE [conn10] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.093-0500 c20013| 2016-04-06T02:53:05.686-0500 D REPL [conn10] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929171000|2, t: 3 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.096-0500 c20013| 2016-04-06T02:53:05.686-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.102-0500 c20013| 2016-04-06T02:53:05.687-0500 D COMMAND [conn16] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: 
{ _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.103-0500 c20013| 2016-04-06T02:53:05.687-0500 D QUERY [conn16] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.107-0500 c20013| 2016-04-06T02:53:05.687-0500 D REPL [conn16] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929171000|2, t: 3 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.112-0500 c20013| 2016-04-06T02:53:05.687-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.118-0500 c20013| 2016-04-06T02:53:05.687-0500 I WRITE [conn16] update config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.120-0500 c20013| 2016-04-06T02:53:05.700-0500 D REPL [conn16] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929171000|2, t: 3 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.127-0500 c20013| 2016-04-06T02:53:05.700-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.134-0500 c20013| 2016-04-06T02:53:05.700-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.135-0500 c20013| 2016-04-06T02:53:05.724-0500 D COMMAND [conn25] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.140-0500 c20013| 2016-04-06T02:53:05.724-0500 I COMMAND [conn25] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.146-0500 c20013| 2016-04-06T02:53:05.724-0500 D COMMAND [conn15] run command config.$cmd { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: -63.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-63.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { 
ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|74 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.147-0500 c20013| 2016-04-06T02:53:05.724-0500 D QUERY [conn15] Running query: query: { ns: "multidrop.coll" } sort: { lastmod: -1 } projection: {} ntoreturn=1 [js_test:multi_coll_drop] 2016-04-06T02:54:17.152-0500 c20013| 2016-04-06T02:53:05.724-0500 D QUERY [conn15] score(2.0003) = baseScore(1) + productivity((1 advanced)/(1 works) = 1) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) [js_test:multi_coll_drop] 2016-04-06T02:54:17.155-0500 c20013| 2016-04-06T02:53:05.724-0500 I COMMAND [conn15] query config.chunks query: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } } planSummary: IXSCAN { ns: 1, lastmod: 1 } ntoreturn:1 ntoskip:0 keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:184 locks:{ Global: { acquireCount: { r: 3, W: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.155-0500 c20013| 2016-04-06T02:53:05.724-0500 D QUERY [conn15] Using idhack: { _id: "multidrop.coll-_id_-64.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.157-0500 c20013| 2016-04-06T02:53:05.724-0500 D QUERY [conn15] Using idhack: { _id: "multidrop.coll-_id_-63.0" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.160-0500 c20013| 2016-04-06T02:53:05.725-0500 D REPL [conn15] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929171000|2, t: 3 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.161-0500 c20013| 2016-04-06T02:53:05.725-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.168-0500 c20013| 2016-04-06T02:53:05.725-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.170-0500 c20013| 2016-04-06T02:53:05.727-0500 D REPL [conn15] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929171000|2, t: 3 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.173-0500 c20013| 2016-04-06T02:53:05.727-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.176-0500 c20013| 2016-04-06T02:53:05.727-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.181-0500 c20013| 2016-04-06T02:53:05.727-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.184-0500 c20013| 
2016-04-06T02:53:06.668-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1683 -- target:mongovm16:20011 db:admin expDate:2016-04-06T02:53:16.668-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.186-0500 c20013| 2016-04-06T02:53:06.668-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:17.188-0500 c20013| 2016-04-06T02:53:06.668-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:17.190-0500 c20013| 2016-04-06T02:53:06.668-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections [js_test:multi_coll_drop] 2016-04-06T02:54:17.190-0500 c20013| 2016-04-06T02:53:06.668-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:17.192-0500 c20013| 2016-04-06T02:53:06.668-0500 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1685 -- target:mongovm16:20012 db:admin expDate:2016-04-06T02:53:16.668-0500 cmd:{ replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20013", fromId: 2, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.198-0500 c20013| 2016-04-06T02:53:06.668-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1685 on host mongovm16:20012 [js_test:multi_coll_drop] 2016-04-06T02:54:17.199-0500 c20013| 2016-04-06T02:53:06.668-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1684 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:17.201-0500 c20013| 2016-04-06T02:53:06.673-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1685 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", term: 4, primaryId: 2, durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, opTime: { ts: Timestamp 1459929185000|1, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.208-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot from before I became primary, optime: { ts: Timestamp 1459929171000|2, t: 3 }, firstOpTimeOfMyTerm: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.215-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.219-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.222-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929163000|8, t: 3 }, name-id: "259" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.226-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Updating _lastCommittedOpTime to { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.228-0500 c20013| 
2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.237-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.241-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.245-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.248-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.251-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.252-0500 c20013| 2016-04-06T02:53:06.673-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20012 at 2016-04-06T07:53:08.673Z [js_test:multi_coll_drop] 2016-04-06T02:54:17.255-0500 c20013| 2016-04-06T02:53:06.675-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:17.256-0500 c20013| 2016-04-06T02:53:06.675-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1684 finished with response: {} [js_test:multi_coll_drop] 2016-04-06T02:54:17.258-0500 c20013| 2016-04-06T02:53:06.675-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1683 on host mongovm16:20011 [js_test:multi_coll_drop] 2016-04-06T02:54:17.263-0500 c20013| 2016-04-06T02:53:06.675-0500 D ASIO [NetworkInterfaceASIO-Replication-0] Request 1683 finished with response: { ok: 1.0, state: 2, v: 1, hbmsg: "", set: "multidrop-configRS", syncingTo: "mongovm16:20012", term: 4, durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, opTime: { ts: Timestamp 1459929185000|1, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.269-0500 c20013| 2016-04-06T02:53:06.675-0500 D REPL [ReplicationExecutor] Ignoring older committed snapshot optime: { ts: Timestamp 1459929163000|8, t: 3 }, currentCommittedOpTime: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.271-0500 c20013| 2016-04-06T02:53:06.676-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.273-0500 c20013| 
2016-04-06T02:53:06.676-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.280-0500 c20013| 2016-04-06T02:53:06.676-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.282-0500 c20013| 2016-04-06T02:53:06.676-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.288-0500 c20013| 2016-04-06T02:53:06.676-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.293-0500 c20013| 2016-04-06T02:53:06.676-0500 D REPL [ReplicationExecutor] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.297-0500 c20013| 2016-04-06T02:53:06.676-0500 D REPL [ReplicationExecutor] Scheduling heartbeat to mongovm16:20011 at 2016-04-06T07:53:08.676Z [js_test:multi_coll_drop] 2016-04-06T02:54:17.298-0500 c20013| 2016-04-06T02:53:06.841-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:53018 #26 (15 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.299-0500 c20013| 2016-04-06T02:53:06.842-0500 D COMMAND [conn26] run command admin.$cmd { isMaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.305-0500 c20013| 2016-04-06T02:53:06.842-0500 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1 } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.308-0500 c20013| 2016-04-06T02:53:06.842-0500 D COMMAND [conn26] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.310-0500 c20013| 2016-04-06T02:53:06.844-0500 I COMMAND [conn26] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.311-0500 c20013| 2016-04-06T02:53:07.133-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:53034 #27 (16 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.312-0500 c20013| 2016-04-06T02:53:07.133-0500 D COMMAND [conn27] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.316-0500 c20013| 2016-04-06T02:53:07.133-0500 I COMMAND [conn27] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20014" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.321-0500 c20013| 2016-04-06T02:53:07.133-0500 D COMMAND [conn27] run command admin.$cmd { _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.324-0500 c20013| 2016-04-06T02:53:07.133-0500 D COMMAND [conn27] command: _getUserCacheGeneration 
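
The repeated "Required snapshot optime ... is not yet part of the current 'committed' snapshot" and "Ignoring older committed snapshot from before I became primary" lines are the new term-4 primary declining to acknowledge w:"majority" writes until an op from its own term (firstOpTimeOfMyTerm, Timestamp 1459929185000|1) is majority-durable; the config.mongos ping updates above are exactly such writes. A shell equivalent of one of those updates, mirroring the document shape and write concern from the log (the ping and up values are illustrative):

    // This blocks, or times out after 15s, until the update is
    // majority-committed -- the wait these snapshot messages describe.
    db.getSiblingDB("config").mongos.update(
        {_id: "mongovm16:20014"},
        {$set: {ping: new Date(), up: 44, waiting: false}},
        {upsert: true, writeConcern: {w: "majority", wtimeout: 15000}});
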
[js_test:multi_coll_drop] 2016-04-06T02:54:17.325-0500 c20013| 2016-04-06T02:53:07.134-0500 I COMMAND [conn27] command admin.$cmd command: _getUserCacheGeneration { _getUserCacheGeneration: 1, maxTimeMS: 30000 } numYields:0 reslen:317 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.328-0500 c20013| 2016-04-06T02:53:07.161-0500 D REPL [NetworkInterfaceASIO-SyncSourceFeedback-0] Reporter failed to prepare update command with status: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:54:17.332-0500 c20013| 2016-04-06T02:53:07.161-0500 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongovm16:20011: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:54:17.335-0500 c20013| 2016-04-06T02:53:07.161-0500 D REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: InvalidSyncSource: Sync target is no longer valid [js_test:multi_coll_drop] 2016-04-06T02:54:17.336-0500 c20013| 2016-04-06T02:53:07.167-0500 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.339-0500 c20013| 2016-04-06T02:53:07.167-0500 D COMMAND [conn18] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:17.341-0500 c20013| 2016-04-06T02:53:07.167-0500 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20011", fromId: 0, term: 4 } numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.344-0500 c20013| 2016-04-06T02:53:07.374-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:53046 #28 (17 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.347-0500 c20013| 2016-04-06T02:53:07.374-0500 D COMMAND [conn28] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.350-0500 c20013| 2016-04-06T02:53:07.374-0500 I COMMAND [conn28] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20015" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.353-0500 c20013| 2016-04-06T02:53:07.375-0500 D COMMAND [conn28] run command admin.$cmd { _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.353-0500 c20013| 2016-04-06T02:53:07.375-0500 D COMMAND [conn28] command: _getUserCacheGeneration [js_test:multi_coll_drop] 2016-04-06T02:54:17.358-0500 c20013| 2016-04-06T02:53:07.375-0500 I COMMAND [conn28] command admin.$cmd command: _getUserCacheGeneration { _getUserCacheGeneration: 1, maxTimeMS: 30000 } numYields:0 reslen:317 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.364-0500 c20013| 2016-04-06T02:53:08.184-0500 D COMMAND [conn17] run command admin.$cmd { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.364-0500 c20013| 2016-04-06T02:53:08.184-0500 D COMMAND [conn17] command: replSetHeartbeat [js_test:multi_coll_drop] 2016-04-06T02:54:17.367-0500 c20013| 2016-04-06T02:53:08.184-0500 I COMMAND [conn17] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "multidrop-configRS", configVersion: 1, from: "mongovm16:20012", fromId: 1, term: 4 } 
numYields:0 reslen:480 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.368-0500 c20013| 2016-04-06T02:53:08.188-0500 D COMMAND [conn18] run command local.$cmd { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.370-0500 c20013| 2016-04-06T02:53:08.188-0500 D QUERY [conn18] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: 1 } projection: {} limit: 1, planSummary: COLLSCAN [js_test:multi_coll_drop] 2016-04-06T02:54:17.373-0500 c20013| 2016-04-06T02:53:08.188-0500 I COMMAND [conn18] command local.oplog.rs command: find { find: "oplog.rs", limit: 1, sort: { $natural: 1 } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:254 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.375-0500 c20013| 2016-04-06T02:53:08.193-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:53094 #29 (18 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.376-0500 c20013| 2016-04-06T02:53:08.194-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:53095 #30 (19 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.377-0500 c20013| 2016-04-06T02:53:08.194-0500 D COMMAND [conn29] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.378-0500 c20013| 2016-04-06T02:53:08.194-0500 D COMMAND [conn30] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.380-0500 c20013| 2016-04-06T02:53:08.194-0500 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.383-0500 c20013| 2016-04-06T02:53:08.194-0500 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.386-0500 c20013| 2016-04-06T02:53:08.195-0500 D COMMAND [conn29] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929185000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.399-0500 c20013| 2016-04-06T02:53:08.195-0500 D COMMAND [conn30] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.401-0500 c20013| 2016-04-06T02:53:08.195-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.405-0500 c20013| 2016-04-06T02:53:08.195-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 
2016-04-06T02:54:17.408-0500 c20013| 2016-04-06T02:53:08.195-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.416-0500 c20013| 2016-04-06T02:53:08.195-0500 I COMMAND [conn30] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.428-0500 c20013| 2016-04-06T02:53:08.195-0500 I COMMAND [conn29] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1459929185000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 4 } planSummary: COLLSCAN cursorid:23953707769 keysExamined:0 docsExamined:4 numYields:0 nreturned:4 reslen:1494 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.430-0500 c20013| 2016-04-06T02:53:08.198-0500 D COMMAND [conn29] run command local.$cmd { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|1, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.434-0500 c20013| 2016-04-06T02:53:08.198-0500 D COMMAND [conn30] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.435-0500 c20013| 2016-04-06T02:53:08.198-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.440-0500 c20013| 2016-04-06T02:53:08.198-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|2, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.442-0500 c20013| 2016-04-06T02:53:08.198-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.445-0500 c20013| 2016-04-06T02:53:08.198-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.450-0500 c20013| 2016-04-06T02:53:08.198-0500 D REPL [conn30] Required snapshot optime: { 
ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.457-0500 c20013| 2016-04-06T02:53:08.198-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.463-0500 c20013| 2016-04-06T02:53:08.198-0500 I COMMAND [conn30] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.480-0500 c20013| 2016-04-06T02:53:08.199-0500 D COMMAND [conn30] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.483-0500 c20013| 2016-04-06T02:53:08.199-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.488-0500 c20013| 2016-04-06T02:53:08.199-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|3, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.491-0500 c20013| 2016-04-06T02:53:08.199-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.493-0500 c20013| 2016-04-06T02:53:08.199-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.496-0500 c20013| 2016-04-06T02:53:08.199-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.499-0500 c20013| 2016-04-06T02:53:08.199-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.501-0500 c20013| 2016-04-06T02:53:08.199-0500 I COMMAND [conn30] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.506-0500 c20013| 2016-04-06T02:53:08.201-0500 D COMMAND [conn30] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.507-0500 c20013| 2016-04-06T02:53:08.201-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.513-0500 c20013| 2016-04-06T02:53:08.201-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|4, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.517-0500 c20013| 2016-04-06T02:53:08.201-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.520-0500 c20013| 2016-04-06T02:53:08.201-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.523-0500 c20013| 2016-04-06T02:53:08.201-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|1, t: 4 }, name-id: "260" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.525-0500 c20013| 2016-04-06T02:53:08.201-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.530-0500 c20013| 2016-04-06T02:53:08.201-0500 I COMMAND [conn30] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.536-0500 c20013| 2016-04-06T02:53:08.203-0500 D COMMAND [conn30] run command 
admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.536-0500 c20013| 2016-04-06T02:53:08.203-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.539-0500 c20013| 2016-04-06T02:53:08.203-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|4, t: 4 } and is durable through: { ts: Timestamp 1459929185000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.540-0500 c20013| 2016-04-06T02:53:08.203-0500 D REPL [conn30] Updating _lastCommittedOpTime to { ts: Timestamp 1459929185000|2, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.542-0500 c20013| 2016-04-06T02:53:08.203-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|2, t: 4 }, name-id: "261" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.556-0500 c20013| 2016-04-06T02:53:08.203-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|2, t: 4 }, name-id: "261" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.560-0500 c20013| 2016-04-06T02:53:08.203-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|2, t: 4 }, name-id: "261" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.567-0500 c20013| 2016-04-06T02:53:08.203-0500 D REPL [conn30] Required snapshot optime: { ts: Timestamp 1459929185000|4, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|2, t: 4 }, name-id: "261" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.596-0500 c20013| 2016-04-06T02:53:08.203-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.604-0500 c20013| 2016-04-06T02:53:08.203-0500 I COMMAND [conn30] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|2, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.608-0500 c20013| 2016-04-06T02:53:08.205-0500 I COMMAND [conn10] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929171765), up: 44, 
waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 2523ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.621-0500 c20013| 2016-04-06T02:53:08.205-0500 I COMMAND [conn29] command local.oplog.rs command: getMore { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|1, t: 4 } } cursorid:23953707769 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 7ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.628-0500 c20013| 2016-04-06T02:53:08.207-0500 D COMMAND [conn29] run command local.$cmd { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|2, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.639-0500 c20013| 2016-04-06T02:53:08.208-0500 D COMMAND [conn30] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.640-0500 c20013| 2016-04-06T02:53:08.208-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.647-0500 c20013| 2016-04-06T02:53:08.208-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|4, t: 4 } and is durable through: { ts: Timestamp 1459929185000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.648-0500 c20013| 2016-04-06T02:53:08.208-0500 D REPL [conn30] Updating _lastCommittedOpTime to { ts: Timestamp 1459929185000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.653-0500 c20013| 2016-04-06T02:53:08.208-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.656-0500 c20013| 2016-04-06T02:53:08.208-0500 I COMMAND [conn30] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.659-0500 c20013| 2016-04-06T02:53:08.209-0500 D COMMAND [conn10] run command config.$cmd { find: "settings", filter: { _id: 
"chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.666-0500 c20013| 2016-04-06T02:53:08.209-0500 D COMMAND [conn10] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.669-0500 c20013| 2016-04-06T02:53:08.209-0500 D COMMAND [conn10] Using 'committed' snapshot. { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|2, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.670-0500 c20013| 2016-04-06T02:53:08.209-0500 D QUERY [conn10] Using idhack: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:17.675-0500 c20013| 2016-04-06T02:53:08.212-0500 I COMMAND [conn16] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929171773), up: 44, waiting: false, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 2525ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.681-0500 c20013| 2016-04-06T02:53:08.212-0500 I COMMAND [conn15] command config.chunks command: applyOps { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-64.0", lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -64.0 }, max: { _id: -63.0 }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-64.0" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "multidrop.coll-_id_-63.0", lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0'), ns: "multidrop.coll", min: { _id: -63.0 }, max: { _id: MaxKey }, shard: "shard0000" }, o2: { _id: "multidrop.coll-_id_-63.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "multidrop.coll" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|74 } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:343 locks:{ Global: { acquireCount: { r: 6, w: 1, W: 3 } }, Database: { acquireCount: { r: 1, w: 1 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 2487ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.686-0500 c20013| 2016-04-06T02:53:08.212-0500 I COMMAND [conn29] command local.oplog.rs command: getMore { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|2, t: 4 } } cursorid:23953707769 numYields:0 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.702-0500 c20013| 2016-04-06T02:53:08.212-0500 I COMMAND [conn10] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp 1459929185000|2, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:434 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 3ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.715-0500 c20013| 2016-04-06T02:53:08.213-0500 D COMMAND [conn15] run command config.$cmd { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:53:08.213-0500-5704c06465c17830b843f1c8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188213), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -64.0 }, max: { _id: MaxKey } }, left: { min: { _id: -64.0 }, max: { _id: -63.0 }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -63.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.720-0500 c20013| 2016-04-06T02:53:08.216-0500 D COMMAND [conn29] run command local.$cmd { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.722-0500 c20013| 2016-04-06T02:53:08.216-0500 I COMMAND [conn29] command local.oplog.rs command: getMore { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } cursorid:23953707769 numYields:0 nreturned:1 reslen:887 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.724-0500 c20013| 2016-04-06T02:53:08.219-0500 D COMMAND [conn16] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.726-0500 c20013| 2016-04-06T02:53:08.219-0500 D COMMAND [conn16] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.731-0500 c20013| 2016-04-06T02:53:08.219-0500 D COMMAND [conn16] Using 'committed' snapshot. 
{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.732-0500 c20013| 2016-04-06T02:53:08.219-0500 D QUERY [conn16] Using idhack: query: { _id: "balancer" } sort: {} projection: {} limit: 1 [js_test:multi_coll_drop] 2016-04-06T02:54:17.740-0500 c20013| 2016-04-06T02:53:08.221-0500 D COMMAND [conn10] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929188220), up: 61, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.743-0500 c20013| 2016-04-06T02:53:08.221-0500 D QUERY [conn10] Using idhack: { _id: "mongovm16:20014" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.750-0500 c20013| 2016-04-06T02:53:08.221-0500 I WRITE [conn10] update config.mongos query: { _id: "mongovm16:20014" } update: { $set: { _id: "mongovm16:20014", ping: new Date(1459929188220), up: 61, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.752-0500 c20013| 2016-04-06T02:53:08.221-0500 I COMMAND [conn16] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929185000|4, t: 4 } }, limit: 1, maxTimeMS: 30000 } planSummary: IDHACK keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:428 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 1ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.754-0500 c20013| 2016-04-06T02:53:08.221-0500 D COMMAND [conn29] run command local.$cmd { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.757-0500 c20013| 2016-04-06T02:53:08.221-0500 I COMMAND [conn29] command local.oplog.rs command: getMore { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } cursorid:23953707769 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.761-0500 c20013| 2016-04-06T02:53:08.221-0500 D COMMAND [conn16] run command config.$cmd { update: "mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929188221), up: 61, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.761-0500 c20013| 2016-04-06T02:53:08.221-0500 D QUERY [conn16] Using idhack: { _id: "mongovm16:20015" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.764-0500 c20013| 2016-04-06T02:53:08.221-0500 I WRITE [conn16] update 
config.mongos query: { _id: "mongovm16:20015" } update: { $set: { _id: "mongovm16:20015", ping: new Date(1459929188221), up: 61, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.766-0500 c20013| 2016-04-06T02:53:08.225-0500 D COMMAND [conn29] run command local.$cmd { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.772-0500 c20013| 2016-04-06T02:53:08.225-0500 D COMMAND [conn30] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.773-0500 c20013| 2016-04-06T02:53:08.225-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.777-0500 c20013| 2016-04-06T02:53:08.225-0500 I COMMAND [conn29] command local.oplog.rs command: getMore { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } cursorid:23953707769 numYields:0 nreturned:1 reslen:522 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.781-0500 c20013| 2016-04-06T02:53:08.225-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929188000|2, t: 4 } and is durable through: { ts: Timestamp 1459929185000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.787-0500 c20013| 2016-04-06T02:53:08.225-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.797-0500 c20013| 2016-04-06T02:53:08.225-0500 I COMMAND [conn30] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.809-0500 c20013| 2016-04-06T02:53:08.225-0500 I NETWORK [initandlisten] connection accepted from 192.168.100.28:53100 #31 (20 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.812-0500 c20013| 2016-04-06T02:53:08.225-0500 D COMMAND 
[conn30] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.812-0500 c20013| 2016-04-06T02:53:08.225-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.815-0500 c20013| 2016-04-06T02:53:08.225-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929188000|2, t: 4 } and is durable through: { ts: Timestamp 1459929185000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.817-0500 c20013| 2016-04-06T02:53:08.225-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.820-0500 c20013| 2016-04-06T02:53:08.226-0500 I COMMAND [conn30] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|2, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.822-0500 c20013| 2016-04-06T02:53:08.226-0500 D COMMAND [conn31] run command admin.$cmd { isMaster: 1, hostInfo: "mongovm16:20011" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.825-0500 c20013| 2016-04-06T02:53:08.227-0500 D COMMAND [conn30] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.826-0500 c20013| 2016-04-06T02:53:08.227-0500 D COMMAND [conn30] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.827-0500 c20013| 2016-04-06T02:53:08.227-0500 D REPL [conn30] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929188000|3, t: 4 } and is durable through: { ts: Timestamp 1459929185000|4, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.835-0500 c20013| 2016-04-06T02:53:08.227-0500 D REPL [conn30] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.842-0500 c20013| 2016-04-06T02:53:08.227-0500 I COMMAND 
[conn30] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929185000|4, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.846-0500 sh20013| 2016-04-06T02:53:08.228-0500 D COMMAND [conn29] run command local.$cmd { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.852-0500 sh20014| 2016-04-06T02:53:58.159-0500 D ASIO [conn1] startCommand: RemoteCommand 988 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:28.159-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c09606c33406d4d9c0db'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929238159), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.852-0500 sh20014| 2016-04-06T02:53:58.159-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 988 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:17.855-0500 sh20014| 2016-04-06T02:53:58.160-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 988 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.862-0500 sh20014| 2016-04-06T02:53:58.160-0500 D ASIO [conn1] startCommand: RemoteCommand 990 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:28.160-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.862-0500 sh20014| 2016-04-06T02:53:58.160-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 990 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:17.866-0500 sh20014| 2016-04-06T02:53:58.160-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 990 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.869-0500 sh20014| 2016-04-06T02:53:58.161-0500 D ASIO [conn1] startCommand: RemoteCommand 992 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:28.161-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.873-0500 
sh20014| 2016-04-06T02:53:58.161-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 992 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:17.875-0500 sh20014| 2016-04-06T02:53:58.161-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 992 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.879-0500 sh20014| 2016-04-06T02:53:58.161-0500 D ASIO [conn1] startCommand: RemoteCommand 994 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:28.161-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.880-0500 sh20014| 2016-04-06T02:53:58.161-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 994 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:17.892-0500 sh20014| 2016-04-06T02:53:58.162-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 994 finished with response: { host: "mongovm16:20013", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 66033, uptime: 121.0, uptimeMillis: 120821, uptimeEstimate: 85.0, localTime: new Date(1459929238161), asserts: { regular: 0, warning: 0, msg: 0, user: 29, rollovers: 0 }, connections: { current: 13, available: 51187, totalCreated: 71 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 134492608, page_faults: 0 }, globalLock: { totalTime: 120817000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 30, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3754, w: 912, R: 212, W: 393 }, acquireWaitCount: { r: 23, w: 1, W: 10 }, timeAcquiringMicros: { r: 85380, w: 28554, W: 4350 } }, Database: { acquireCount: { r: 1133, w: 234, W: 678 }, acquireWaitCount: { r: 136, W: 5 }, timeAcquiringMicros: { r: 15600, W: 2901 } }, Collection: { acquireCount: { r: 644, w: 214 } }, Metadata: { acquireCount: { w: 71, W: 552 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 616 } }, oplog: { acquireCount: { r: 518, w: 27, R: 1, W: 1 } } }, network: { bytesIn: 156561, bytesOut: 953032, numRequests: 716 }, opcounters: { insert: 3, query: 155, update: 10, delete: 0, getmore: 53, command: 517 }, opcountersRepl: { insert: 64, query: 0, update: 184, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20013", me: "mongovm16:20013", electionId: ObjectId('7fffffff0000000000000008'), rbid: 1885590396 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 134494128, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 749568, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1579400, total_free_bytes: 2877520, central_cache_free_bytes: 257608, transfer_cache_free_bytes: 1040512, thread_cache_free_bytes: 1579400, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, 
num_thread_objs: 129, num_central_objs: 993, num_transfer_objs: 0, free_bytes: 8976, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 576, num_central_objs: 439, num_transfer_objs: 0, free_bytes: 16240, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1503, num_central_objs: 102, num_transfer_objs: 1536, free_bytes: 100512, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 23, num_thread_objs: 902, num_central_objs: 95, num_transfer_objs: 0, free_bytes: 47856, allocated_bytes: 188416 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 62, num_thread_objs: 658, num_central_objs: 104, num_transfer_objs: 6016, free_bytes: 433792, allocated_bytes: 507904 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 37, num_thread_objs: 554, num_central_objs: 6, num_transfer_objs: 2142, free_bytes: 216160, allocated_bytes: 303104 }, { bytes_per_object: 96, pages_per_span: 2, num_spa .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 84 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 6 }, replSetStepDown: { failed: 0, total: 0 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 121 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 27 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 3 } }, document: { deleted: 0, inserted: 6, returned: 403, updated: 17 }, getLastError: { wtime: { num: 23, totalMillis: 20204 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 77, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 197, scannedObjects: 396 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 23, eventWait: 23, cancels: 591, waits: 1755, scheduledNetCmd: 105, 
scheduledDBWork: 4, scheduledXclWork: 6, scheduledWorkAt: 678, scheduledWork: 1888, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 15 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:17.892-0500 sh20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:17.895-0500 sh20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:17.895-0500 sh20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:17.897-0500 sh20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:17.898-0500 sh20014| Succeeded 95 [js_test:multi_coll_drop] 2016-04-06T02:54:17.902-0500 sh20014| Canceled..." }, apply: { batches: { num: 209, totalMillis: 0 }, ops: 216 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 73272, getmores: { num: 399, totalMillis: 33888 }, ops: 226, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 2 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.905-0500 sh20014| 2016-04-06T02:53:58.163-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:48.990-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:17.907-0500 sh20014| 2016-04-06T02:53:58.163-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 2060 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.908-0500 sh20014| 2016-04-06T02:53:58.163-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
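[annotation] The sh20014 sequence above (a findAndModify on config.locks failing with E11000, then reads of config.locks and config.lockpings, then "could not force lock 'multidrop.coll' because elapsed time 2060 < takeover time 900000 ms") is the distributed lock probe. A hedged shell sketch of the same flow, not the lock manager implementation itself; names and values are taken from the log:

    var cfg = new Mongo("mongovm16:20013").getDB("config");
    // Grab the lock only if nobody holds it (state: 0). If another process
    // holds it, the upsert collides on _id and fails with code 11000.
    var grab = cfg.runCommand({
        findAndModify: "locks",
        query: { _id: "multidrop.coll", state: 0 },
        update: { $set: { state: 2,
                          who: "mongovm16:20014:1459929123:-665935931:conn1",
                          process: "mongovm16:20014:1459929123:-665935931",
                          why: "drop" } },
        upsert: true,
        "new": true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    });
    if (grab.ok === 0 && grab.code === 11000) {
        // Lock is held: look up the holder and its last ping, and only force
        // the lock if the holder has been silent for the takeover window.
        var lock = cfg.locks.findOne({ _id: "multidrop.coll" });
        var ping = cfg.lockpings.findOne({ _id: lock.process });
        var elapsedMs = new Date() - ping.ping;
        print("can take over: " + (elapsedMs >= 900000));  // 900000 ms, as in the log
    }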
[js_test:multi_coll_drop] 2016-04-06T02:54:17.912-0500 sh20012| 2016-04-06T02:53:49.042-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.912-0500 sh20012| 2016-04-06T02:53:49.042-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:17.915-0500 sh20012| 2016-04-06T02:53:49.042-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|3, t: 7 } and is durable through: { ts: Timestamp 1459929228000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.918-0500 sh20012| 2016-04-06T02:53:49.042-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.919-0500 sh20015| 2016-04-06T02:54:01.302-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:multi_coll_drop] 2016-04-06T02:54:17.925-0500 sh20012| 2016-04-06T02:53:49.042-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.929-0500 sh20012| 2016-04-06T02:53:49.043-0500 I COMMAND [conn47] command local.oplog.rs command: getMore { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|1, t: 7 } } cursorid:22842679084 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 45ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.931-0500 sh20012| 2016-04-06T02:53:49.043-0500 D COMMAND [conn47] run command local.$cmd { getMore: 22842679084, collection: "oplog.rs", maxTimeMS: 2500, term: 7, lastKnownCommittedOpTime: { ts: Timestamp 1459929228000|3, t: 7 } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.934-0500 sh20013| 2016-04-06T02:53:08.230-0500 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, hostInfo: "mongovm16:20011" } numYields:0 reslen:482 locks:{} protocol:op_query 3ms [js_test:multi_coll_drop] 2016-04-06T02:54:17.936-0500 sh20013| 2016-04-06T02:53:08.235-0500 D REPL [conn15] Required snapshot optime: { ts: Timestamp 1459929188000|1, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|4,
t: 4 }, name-id: "263" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.939-0500 sh20013| 2016-04-06T02:53:08.267-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929188000|1, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|4, t: 4 }, name-id: "263" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.942-0500 sh20013| 2016-04-06T02:53:08.267-0500 D REPL [conn10] Required snapshot optime: { ts: Timestamp 1459929188000|2, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|4, t: 4 }, name-id: "263" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.944-0500 sh20010| 2016-04-06T02:54:01.325-0500 I NETWORK [conn7] end connection 192.168.100.28:35663 (6 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.945-0500 sh20010| 2016-04-06T02:54:01.415-0500 I NETWORK [conn2] end connection 192.168.100.28:58975 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.946-0500 sh20010| 2016-04-06T02:54:01.415-0500 I NETWORK [conn5] end connection 192.168.100.28:59091 (5 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.947-0500 sh20010| 2016-04-06T02:54:01.416-0500 I NETWORK [conn4] end connection 192.168.100.28:59090 (3 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.949-0500 sh20010| 2016-04-06T02:54:01.416-0500 I NETWORK [conn6] end connection 192.168.100.28:35660 (2 connections now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.952-0500 sh20010| 2016-04-06T02:54:01.416-0500 I NETWORK [conn3] end connection 192.168.100.28:59083 (1 connection now open) [js_test:multi_coll_drop] 2016-04-06T02:54:17.956-0500 sh20010| 2016-04-06T02:54:01.502-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:multi_coll_drop] 2016-04-06T02:54:17.957-0500 sh20010| 2016-04-06T02:54:01.502-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture [js_test:multi_coll_drop] 2016-04-06T02:54:17.959-0500 sh20010| 2016-04-06T02:54:01.518-0500 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for mongovm16:20010:1459929128:185613966 :: caused by :: ShutdownInProgress: Shutdown in progress [js_test:multi_coll_drop] 2016-04-06T02:54:17.960-0500 sh20010| 2016-04-06T02:54:01.518-0500 I CONTROL [signalProcessingThread] now exiting [js_test:multi_coll_drop] 2016-04-06T02:54:17.961-0500 sh20010| 2016-04-06T02:54:01.518-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... [js_test:multi_coll_drop] 2016-04-06T02:54:17.962-0500 sh20010| 2016-04-06T02:54:01.518-0500 I NETWORK [signalProcessingThread] closing listening socket: 10 [js_test:multi_coll_drop] 2016-04-06T02:54:17.963-0500 sh20010| 2016-04-06T02:54:01.518-0500 I NETWORK [signalProcessingThread] closing listening socket: 11 [js_test:multi_coll_drop] 2016-04-06T02:54:17.966-0500 sh20010| 2016-04-06T02:54:01.518-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20010.sock [js_test:multi_coll_drop] 2016-04-06T02:54:17.966-0500 sh20010| 2016-04-06T02:54:01.518-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... 
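[annotation] The sh20010 lines above, together with the WiredTiger shutdown that continues below, are the normal SIGTERM teardown path: signal 15, FTDC stop, listener close, storage engine shutdown, exit code 0. In a jstest this is what the harness does when stopping a node; a minimal sketch under the assumption that conn came from an earlier MongoRunner.runMongod() call (not shown in this excerpt):

    // Sends signal 15 and waits for the clean "shutting down with code:0" exit.
    MongoRunner.stopMongod(conn);
    // A ShardingTest named st would tear everything down with: st.stop();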
[js_test:multi_coll_drop] 2016-04-06T02:54:17.969-0500 sh20010| 2016-04-06T02:54:01.518-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down [js_test:multi_coll_drop] 2016-04-06T02:54:17.969-0500 sh20010| 2016-04-06T02:54:01.674-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock... [js_test:multi_coll_drop] 2016-04-06T02:54:17.970-0500 sh20010| 2016-04-06T02:54:01.675-0500 I CONTROL [signalProcessingThread] shutting down with code:0 [js_test:multi_coll_drop] 2016-04-06T02:54:17.970-0500 sh20010| 2016-04-06T02:54:01.675-0500 I CONTROL [initandlisten] shutting down with code:0 [js_test:multi_coll_drop] 2016-04-06T02:54:17.973-0500 [ReplicaSetMonitorWatcher] Detected bad connection created at 1459929232609652 microSec, clearing pool for mongovm16:20012 of 0 connections [js_test:multi_coll_drop] 2016-04-06T02:54:17.973-0500 sh20013| 2016-04-06T02:53:08.267-0500 D REPL [conn16] Required snapshot optime: { ts: Timestamp 1459929188000|3, t: 4 } is not yet part of the current 'committed' snapshot: { optime: { ts: Timestamp 1459929185000|4, t: 4 }, name-id: "263" } [js_test:multi_coll_drop] 2016-04-06T02:54:17.978-0500 sh20013| 2016-04-06T02:53:08.270-0500 D COMMAND [conn31] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:17.979-0500 sh20011| 2016-04-06T02:53:22.051-0500 D COMMAND [conn62] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929201000|1, t: 5 } } } [js_test:multi_coll_drop] 2016-04-06T02:54:17.983-0500 sh20014| 2016-04-06T02:53:58.663-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c09606c33406d4d9c0dc, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:17.984-0500 ReplSetTest Could not call ismaster on node connection to mongovm16:20012: Error: error doing query: failed: network error while attempting to run command 'ismaster' on host 'mongovm16:20012' [js_test:multi_coll_drop] 2016-04-06T02:54:17.986-0500 sh20014| 2016-04-06T02:53:58.663-0500 D ASIO [conn1] startCommand: RemoteCommand 996 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:28.663-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c09606c33406d4d9c0dc'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929238663), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.987-0500 sh20014| 2016-04-06T02:53:58.663-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 996 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:17.990-0500 sh20014| 2016-04-06T02:53:58.664-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 996 finished with response: { ok: 0.0, errmsg: "E11000 
duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.992-0500 sh20014| 2016-04-06T02:53:58.664-0500 D ASIO [conn1] startCommand: RemoteCommand 998 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:28.664-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:17.992-0500 sh20014| 2016-04-06T02:53:58.664-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 998 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:17.999-0500 sh20014| 2016-04-06T02:53:58.664-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 998 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.006-0500 sh20014| 2016-04-06T02:53:58.665-0500 D ASIO [conn1] startCommand: RemoteCommand 1000 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:28.665-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.006-0500 sh20014| 2016-04-06T02:53:58.666-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1000 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.008-0500 sh20014| 2016-04-06T02:53:58.666-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1000 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.008-0500 sh20014| 2016-04-06T02:53:58.666-0500 D ASIO [conn1] startCommand: RemoteCommand 1002 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:28.666-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.009-0500 sh20014| 2016-04-06T02:53:58.666-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1002 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.043-0500 sh20014| 2016-04-06T02:53:58.668-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 1002 finished with response: { host: "mongovm16:20013", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 66033, uptime: 121.0, uptimeMillis: 121326, uptimeEstimate: 85.0, localTime: new Date(1459929238666), asserts: { regular: 0, warning: 0, msg: 0, user: 30, rollovers: 0 }, connections: { current: 13, available: 51187, totalCreated: 71 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 134364000, page_faults: 0 }, globalLock: { totalTime: 121322000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 30, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3759, w: 913, R: 212, W: 393 }, acquireWaitCount: { r: 23, w: 1, W: 10 }, timeAcquiringMicros: { r: 85380, w: 28554, W: 4350 } }, Database: { acquireCount: { r: 1135, w: 235, W: 678 }, acquireWaitCount: { r: 136, W: 5 }, timeAcquiringMicros: { r: 15600, W: 2901 } }, Collection: { acquireCount: { r: 646, w: 215 } }, Metadata: { acquireCount: { w: 71, W: 552 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 616 } }, oplog: { acquireCount: { r: 518, w: 27, R: 1, W: 1 } } }, network: { bytesIn: 157522, bytesOut: 980047, numRequests: 720 }, opcounters: { insert: 3, query: 157, update: 10, delete: 0, getmore: 53, command: 519 }, opcountersRepl: { insert: 64, query: 0, update: 184, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20013", me: "mongovm16:20013", electionId: ObjectId('7fffffff0000000000000008'), rbid: 1885590396 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 134365520, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 761856, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1649752, total_free_bytes: 2993840, central_cache_free_bytes: 287576, transfer_cache_free_bytes: 1056512, thread_cache_free_bytes: 1649752, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 129, num_central_objs: 993, num_transfer_objs: 0, free_bytes: 8976, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 569, num_central_objs: 447, num_transfer_objs: 0, free_bytes: 16256, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1525, num_central_objs: 203, num_transfer_objs: 1536, free_bytes: 104448, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 891, num_central_objs: 332, num_transfer_objs: 0, free_bytes: 58704, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 62, num_thread_objs: 680, num_central_objs: 128, num_transfer_objs: 6016, free_bytes: 436736, allocated_bytes: 507904 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 38, num_thread_objs: 557, num_central_objs: 108, num_transfer_objs: 2142, free_bytes: 224560, allocated_bytes: 311296 }, { bytes_per_object: 96, pages_per_span: 2, num .......... 
cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 84 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 6 }, replSetStepDown: { failed: 0, total: 0 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 121 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 28 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 3 } }, document: { deleted: 0, inserted: 6, returned: 405, updated: 17 }, getLastError: { wtime: { num: 23, totalMillis: 20204 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 79, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 199, scannedObjects: 398 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 23, eventWait: 23, cancels: 591, waits: 1759, scheduledNetCmd: 107, scheduledDBWork: 4, scheduledXclWork: 6, scheduledWorkAt: 680, scheduledWork: 1892, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 15 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:18.044-0500 sh20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:18.044-0500 sh20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:18.044-0500 sh20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:18.046-0500 sh20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:18.052-0500 sh20014| Succeeded 97 [js_test:multi_coll_drop] 2016-04-06T02:54:18.062-0500 sh20014| Canceled..." 
}, apply: { batches: { num: 209, totalMillis: 0 }, ops: 216 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 73272, getmores: { num: 399, totalMillis: 33888 }, ops: 226, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 2 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.069-0500 sh20014| 2016-04-06T02:53:58.668-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:48.990-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:18.071-0500 sh20014| 2016-04-06T02:53:58.668-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 2565 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.071-0500 sh20014| 2016-04-06T02:53:58.668-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:18.074-0500 2016-04-06T02:54:06.103-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 14 secs, remote host 192.168.100.28:20011) [js_test:multi_coll_drop] 2016-04-06T02:54:18.076-0500 sh20013| 2016-04-06T02:53:08.270-0500 D COMMAND [conn31] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:18.077-0500 sh20013| 2016-04-06T02:53:08.270-0500 D REPL [conn31] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929188000|3, t: 4 } and is durable through: { ts: Timestamp 1459929188000|3, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.081-0500 sh20013| 2016-04-06T02:53:08.270-0500 D REPL [conn31] Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|3, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.086-0500 sh20013| 2016-04-06T02:53:08.270-0500 D REPL [conn31] received notification that node with memberID 1 in config with version 1 has reached optime: { ts: Timestamp 1459929185000|1, t: 4 } and is durable through: { ts: Timestamp 1459929185000|1, t: 4 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.094-0500 sh20013| 2016-04-06T02:53:08.270-0500 I COMMAND [conn29] command local.oplog.rs command: getMore { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929185000|4, t: 4 } } cursorid:23953707769 numYields:1 nreturned:0 reslen:352 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 42ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.102-0500 sh20013| 2016-04-06T02:53:08.270-0500 I COMMAND [conn31] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, appliedOpTime: { ts: Timestamp 1459929188000|3, t: 4 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, appliedOpTime: { ts: Timestamp 1459929185000|1, t: 4 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, appliedOpTime: { ts: Timestamp 1459929163000|8, t: 3 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.109-0500 sh20013| 2016-04-06T02:53:08.270-0500 I COMMAND [conn16] command config.$cmd command: update { update: 
"mongos", updates: [ { q: { _id: "mongovm16:20015" }, u: { $set: { _id: "mongovm16:20015", ping: new Date(1459929188221), up: 61, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 49ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.122-0500 sh20013| 2016-04-06T02:53:08.271-0500 I COMMAND [conn15] command config.changelog command: insert { insert: "changelog", documents: [ { _id: "mongovm16-2016-04-06T02:53:08.213-0500-5704c06465c17830b843f1c8", server: "mongovm16", clientAddr: "192.168.100.28:59091", time: new Date(1459929188213), what: "split", ns: "multidrop.coll", details: { before: { min: { _id: -64.0 }, max: { _id: MaxKey } }, left: { min: { _id: -64.0 }, max: { _id: -63.0 }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') }, right: { min: { _id: -63.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('5704c02806c33406d4d9c0c0') } } } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } ninserted:1 numYields:0 reslen:371 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 2, W: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 57ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.135-0500 sh20013| 2016-04-06T02:53:08.271-0500 I COMMAND [conn10] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "mongovm16:20014" }, u: { $set: { _id: "mongovm16:20014", ping: new Date(1459929188220), up: 61, waiting: true, mongoVersion: "3.3.4-37-g36f3ff8" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } numYields:0 reslen:386 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 49ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.140-0500 sh20013| 2016-04-06T02:53:08.274-0500 D COMMAND [conn15] run command config.$cmd { findAndModify: "locks", query: { ts: ObjectId('5704c04b65c17830b843f1c7') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.141-0500 sh20013| 2016-04-06T02:53:08.274-0500 D QUERY [conn15] Relevant index 0 is kp: { ts: 1 } name: 'ts_1' io: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:18.144-0500 sh20014| 2016-04-06T02:53:59.168-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c09706c33406d4d9c0dd, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:18.152-0500 sh20014| 2016-04-06T02:53:59.168-0500 D ASIO [conn1] startCommand: RemoteCommand 1004 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:29.168-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c09706c33406d4d9c0dd'), state: 2, who: 
"mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929239168), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.155-0500 sh20014| 2016-04-06T02:53:59.172-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1004 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.159-0500 sh20014| 2016-04-06T02:53:59.172-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1004 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.164-0500 sh20014| 2016-04-06T02:53:59.173-0500 D ASIO [conn1] startCommand: RemoteCommand 1006 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:29.173-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.169-0500 sh20014| 2016-04-06T02:53:59.173-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1006 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.172-0500 sh20014| 2016-04-06T02:53:59.173-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1006 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.175-0500 sh20014| 2016-04-06T02:53:59.173-0500 D ASIO [conn1] startCommand: RemoteCommand 1008 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:29.173-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.176-0500 sh20014| 2016-04-06T02:53:59.173-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1008 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.179-0500 sh20014| 2016-04-06T02:53:59.174-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1008 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.180-0500 sh20014| 2016-04-06T02:53:59.175-0500 D ASIO [conn1] startCommand: RemoteCommand 1010 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:29.175-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.181-0500 sh20014| 2016-04-06T02:53:59.176-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1010 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.200-0500 sh20014| 2016-04-06T02:53:59.177-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... 
Request 1010 finished with response: { host: "mongovm16:20013", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 66033, uptime: 122.0, uptimeMillis: 121836, uptimeEstimate: 86.0, localTime: new Date(1459929239176), asserts: { regular: 0, warning: 0, msg: 0, user: 31, rollovers: 0 }, connections: { current: 13, available: 51187, totalCreated: 71 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 134364256, page_faults: 0 }, globalLock: { totalTime: 121832000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 30, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3766, w: 914, R: 212, W: 393 }, acquireWaitCount: { r: 23, w: 1, W: 10 }, timeAcquiringMicros: { r: 85380, w: 28554, W: 4350 } }, Database: { acquireCount: { r: 1138, w: 236, W: 678 }, acquireWaitCount: { r: 136, W: 5 }, timeAcquiringMicros: { r: 15600, W: 2901 } }, Collection: { acquireCount: { r: 648, w: 216 } }, Metadata: { acquireCount: { w: 71, W: 552 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 616 } }, oplog: { acquireCount: { r: 519, w: 27, R: 1, W: 1 } } }, network: { bytesIn: 158710, bytesOut: 1008041, numRequests: 726 }, opcounters: { insert: 3, query: 159, update: 10, delete: 0, getmore: 53, command: 523 }, opcountersRepl: { insert: 64, query: 0, update: 184, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20013", me: "mongovm16:20013", electionId: ObjectId('7fffffff0000000000000008'), rbid: 1885590396 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 134365776, heap_size: 138121216 }, tcmalloc: { pageheap_free_bytes: 761856, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1632712, total_free_bytes: 2993584, central_cache_free_bytes: 288040, transfer_cache_free_bytes: 1072832, thread_cache_free_bytes: 1632712, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 129, num_central_objs: 993, num_transfer_objs: 0, free_bytes: 8976, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 594, num_central_objs: 422, num_transfer_objs: 0, free_bytes: 16256, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1534, num_central_objs: 193, num_transfer_objs: 1536, free_bytes: 104416, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 734, num_central_objs: 319, num_transfer_objs: 170, free_bytes: 58704, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 62, num_thread_objs: 680, num_central_objs: 128, num_transfer_objs: 6016, free_bytes: 436736, allocated_bytes: 507904 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 38, num_thread_objs: 456, num_central_objs: 107, num_transfer_objs: 2244, free_bytes: 224560, allocated_bytes: 311296 }, { bytes_per_object: 96, pages_per_span: 2, .......... 
cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 85 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 6 }, replSetStepDown: { failed: 0, total: 0 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 121 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 29 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 3 } }, document: { deleted: 0, inserted: 6, returned: 407, updated: 17 }, getLastError: { wtime: { num: 23, totalMillis: 20204 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 81, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 201, scannedObjects: 400 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 23, eventWait: 23, cancels: 591, waits: 1768, scheduledNetCmd: 107, scheduledDBWork: 4, scheduledXclWork: 6, scheduledWorkAt: 680, scheduledWork: 1901, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 15 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:18.201-0500 sh20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:18.201-0500 sh20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:18.202-0500 sh20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:18.204-0500 sh20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:18.206-0500 sh20014| Succeeded 97 [js_test:multi_coll_drop] 2016-04-06T02:54:18.213-0500 sh20014| Canceled..." 
}, apply: { batches: { num: 209, totalMillis: 0 }, ops: 216 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 73272, getmores: { num: 399, totalMillis: 33888 }, ops: 226, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 2 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.218-0500 sh20014| 2016-04-06T02:53:59.177-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:48.990-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:18.222-0500 sh20014| 2016-04-06T02:53:59.177-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 3074 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.223-0500 sh20014| 2016-04-06T02:53:59.177-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. [js_test:multi_coll_drop] 2016-04-06T02:54:18.227-0500 sh20014| 2016-04-06T02:53:59.677-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c09706c33406d4d9c0de, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:18.230-0500 sh20014| 2016-04-06T02:53:59.677-0500 D ASIO [conn1] startCommand: RemoteCommand 1012 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:29.677-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c09706c33406d4d9c0de'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929239677), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.232-0500 sh20014| 2016-04-06T02:53:59.678-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1012 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.244-0500 sh20014| 2016-04-06T02:53:59.679-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1012 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.252-0500 sh20014| 2016-04-06T02:53:59.679-0500 D ASIO [conn1] startCommand: RemoteCommand 1014 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:29.679-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.255-0500 sh20014| 2016-04-06T02:53:59.679-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1014 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.263-0500 sh20014| 2016-04-06T02:53:59.680-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1014 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: "mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 
}, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.269-0500 sh20014| 2016-04-06T02:53:59.680-0500 D ASIO [conn1] startCommand: RemoteCommand 1016 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:29.680-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.270-0500 sh20014| 2016-04-06T02:53:59.681-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1016 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.275-0500 sh20014| 2016-04-06T02:53:59.681-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1016 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.279-0500 sh20014| 2016-04-06T02:53:59.681-0500 D ASIO [conn1] startCommand: RemoteCommand 1018 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:29.681-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.292-0500 sh20014| 2016-04-06T02:53:59.682-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1018 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.306-0500 sh20014| 2016-04-06T02:53:59.683-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] warning: log line attempted (22kB) over max size (10kB), printing beginning and end ... Request 1018 finished with response: { host: "mongovm16:20013", advisoryHostFQDNs: [], version: "3.3.4-37-g36f3ff8", process: "mongod", pid: 66033, uptime: 122.0, uptimeMillis: 122342, uptimeEstimate: 86.0, localTime: new Date(1459929239682), asserts: { regular: 0, warning: 0, msg: 0, user: 32, rollovers: 0 }, connections: { current: 13, available: 51187, totalCreated: 71 }, extra_info: { note: "fields vary by platform", heap_usage_bytes: 134364528, page_faults: 0 }, globalLock: { totalTime: 122338000, currentQueue: { total: 0, readers: 0, writers: 0 }, activeClients: { total: 30, readers: 0, writers: 0 } }, locks: { Global: { acquireCount: { r: 3783, w: 915, R: 212, W: 393 }, acquireWaitCount: { r: 23, w: 1, W: 10 }, timeAcquiringMicros: { r: 85380, w: 28554, W: 4350 } }, Database: { acquireCount: { r: 1146, w: 237, W: 678 }, acquireWaitCount: { r: 136, W: 5 }, timeAcquiringMicros: { r: 15600, W: 2901 } }, Collection: { acquireCount: { r: 650, w: 217 } }, Metadata: { acquireCount: { w: 71, W: 552 }, acquireWaitCount: { W: 8 }, timeAcquiringMicros: { W: 616 } }, oplog: { acquireCount: { r: 525, w: 27, R: 1, W: 1 } } }, network: { bytesIn: 160973, bytesOut: 1035988, numRequests: 734 }, opcounters: { insert: 3, query: 161, update: 10, delete: 0, getmore: 55, command: 527 }, opcountersRepl: { insert: 64, query: 0, update: 184, delete: 0, getmore: 0, command: 0 }, repl: { hosts: [ "mongovm16:20011", "mongovm16:20012", "mongovm16:20013" ], setName: "multidrop-configRS", setVersion: 1, ismaster: true, secondary: false, primary: "mongovm16:20013", me: "mongovm16:20013", electionId: ObjectId('7fffffff0000000000000008'), rbid: 1885590396 }, storageEngine: { name: "wiredTiger", supportsCommittedReads: true, readOnly: false, persistent: true }, tcmalloc: { generic: { current_allocated_bytes: 134366048, heap_size: 
138121216 }, tcmalloc: { pageheap_free_bytes: 753664, pageheap_unmapped_bytes: 0, max_total_thread_cache_bytes: 1073741824, current_total_thread_cache_bytes: 1646920, total_free_bytes: 3001504, central_cache_free_bytes: 281752, transfer_cache_free_bytes: 1072832, thread_cache_free_bytes: 1646920, aggressive_memory_decommit: 0, size_classes: [ { bytes_per_object: 0, pages_per_span: 0, num_spans: 0, num_thread_objs: 0, num_central_objs: 0, num_transfer_objs: 0, free_bytes: 0, allocated_bytes: 0 }, { bytes_per_object: 8, pages_per_span: 2, num_spans: 2, num_thread_objs: 129, num_central_objs: 993, num_transfer_objs: 0, free_bytes: 8976, allocated_bytes: 16384 }, { bytes_per_object: 16, pages_per_span: 2, num_spans: 4, num_thread_objs: 594, num_central_objs: 422, num_transfer_objs: 0, free_bytes: 16256, allocated_bytes: 32768 }, { bytes_per_object: 32, pages_per_span: 2, num_spans: 37, num_thread_objs: 1603, num_central_objs: 124, num_transfer_objs: 1536, free_bytes: 104416, allocated_bytes: 303104 }, { bytes_per_object: 48, pages_per_span: 2, num_spans: 24, num_thread_objs: 741, num_central_objs: 311, num_transfer_objs: 170, free_bytes: 58656, allocated_bytes: 196608 }, { bytes_per_object: 64, pages_per_span: 2, num_spans: 62, num_thread_objs: 680, num_central_objs: 128, num_transfer_objs: 6016, free_bytes: 436736, allocated_bytes: 507904 }, { bytes_per_object: 80, pages_per_span: 2, num_spans: 38, num_thread_objs: 487, num_central_objs: 76, num_transfer_objs: 2244, free_bytes: 224560, allocated_bytes: 311296 }, { bytes_per_object: 96, pages_per_span: 2, n .......... cheSetFilter: { failed: 0, total: 0 }, profile: { failed: 0, total: 0 }, reIndex: { failed: 0, total: 0 }, renameCollection: { failed: 0, total: 0 }, repairCursor: { failed: 0, total: 0 }, repairDatabase: { failed: 0, total: 0 }, replSetDeclareElectionWinner: { failed: 0, total: 0 }, replSetElect: { failed: 0, total: 0 }, replSetFreeze: { failed: 0, total: 0 }, replSetFresh: { failed: 0, total: 0 }, replSetGetConfig: { failed: 0, total: 0 }, replSetGetRBID: { failed: 0, total: 2 }, replSetGetStatus: { failed: 0, total: 0 }, replSetHeartbeat: { failed: 0, total: 85 }, replSetInitiate: { failed: 0, total: 0 }, replSetMaintenance: { failed: 0, total: 0 }, replSetReconfig: { failed: 0, total: 0 }, replSetRequestVotes: { failed: 0, total: 6 }, replSetStepDown: { failed: 0, total: 0 }, replSetSyncFrom: { failed: 0, total: 0 }, replSetTest: { failed: 0, total: 0 }, replSetUpdatePosition: { failed: 0, total: 123 }, resetError: { failed: 0, total: 0 }, resync: { failed: 0, total: 0 }, revokePrivilegesFromRole: { failed: 0, total: 0 }, revokeRolesFromRole: { failed: 0, total: 0 }, revokeRolesFromUser: { failed: 0, total: 0 }, rolesInfo: { failed: 0, total: 0 }, saslContinue: { failed: 0, total: 0 }, saslStart: { failed: 0, total: 0 }, serverStatus: { failed: 0, total: 30 }, setCommittedSnapshot: { failed: 0, total: 0 }, setParameter: { failed: 0, total: 0 }, setShardVersion: { failed: 0, total: 0 }, shardConnPoolStats: { failed: 0, total: 0 }, shardingState: { failed: 0, total: 0 }, shutdown: { failed: 0, total: 0 }, sleep: { failed: 0, total: 0 }, splitChunk: { failed: 0, total: 0 }, splitVector: { failed: 0, total: 0 }, stageDebug: { failed: 0, total: 0 }, top: { failed: 0, total: 0 }, touch: { failed: 0, total: 0 }, unsetSharding: { failed: 0, total: 0 }, update: { failed: 0, total: 10 }, updateRole: { failed: 0, total: 0 }, updateUser: { failed: 0, total: 0 }, usersInfo: { failed: 0, total: 0 }, validate: { failed: 0, total: 0 }, 
whatsmyuri: { failed: 0, total: 0 }, writebacklisten: { failed: 0, total: 0 } }, cursor: { timedOut: 0, open: { noTimeout: 0, pinned: 2, total: 3 } }, document: { deleted: 0, inserted: 6, returned: 409, updated: 17 }, getLastError: { wtime: { num: 23, totalMillis: 20204 }, wtimeouts: 0 }, operation: { fastmod: 0, idhack: 83, scanAndOrder: 0, writeConflicts: 0 }, queryExecutor: { scanned: 203, scannedObjects: 402 }, record: { moves: 0 }, repl: { executor: { counters: { eventCreated: 23, eventWait: 23, cancels: 593, waits: 1776, scheduledNetCmd: 107, scheduledDBWork: 4, scheduledXclWork: 6, scheduledWorkAt: 682, scheduledWork: 1911, schedulingFailures: 0 }, queues: { networkInProgress: 0, dbWorkInProgress: 0, exclusiveInProgress: 0, sleepers: 3, ready: 0, free: 15 }, unsignaledEvents: 3, eventWaiters: 0, shuttingDown: false, networkInterface: " [js_test:multi_coll_drop] 2016-04-06T02:54:18.307-0500 sh20014| NetworkInterfaceASIO Operations' Diagnostic: [js_test:multi_coll_drop] 2016-04-06T02:54:18.308-0500 sh20014| Operation: Count: [js_test:multi_coll_drop] 2016-04-06T02:54:18.310-0500 sh20014| Connecting 0 [js_test:multi_coll_drop] 2016-04-06T02:54:18.311-0500 sh20014| In Progress 0 [js_test:multi_coll_drop] 2016-04-06T02:54:18.312-0500 sh20014| Succeeded 97 [js_test:multi_coll_drop] 2016-04-06T02:54:18.316-0500 sh20014| Canceled..." }, apply: { batches: { num: 209, totalMillis: 0 }, ops: 216 }, buffer: { count: 0, maxSizeBytes: 268435456, sizeBytes: 0 }, network: { bytes: 73272, getmores: { num: 399, totalMillis: 33888 }, ops: 226, readersCreated: 1 }, preload: { docs: { num: 0, totalMillis: 0 }, indexes: { num: 0, totalMillis: 0 } } }, storage: { freelist: { search: { bucketExhausted: 0, requests: 0, scanned: 0 } } }, ttl: { deletedDocuments: 0, passes: 2 } }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.320-0500 sh20014| 2016-04-06T02:53:59.683-0500 D SHARDING [conn1] checking last ping for lock 'multidrop.coll' against last seen process mongovm16:20010:1459929128:185613966 and ping 2016-04-06T02:53:48.990-0500 [js_test:multi_coll_drop] 2016-04-06T02:54:18.321-0500 sh20014| 2016-04-06T02:53:59.683-0500 D SHARDING [conn1] could not force lock 'multidrop.coll' because elapsed time 3581 < takeover time 900000 ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.322-0500 sh20014| 2016-04-06T02:53:59.683-0500 D SHARDING [conn1] distributed lock 'multidrop.coll' was not acquired. 
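The round trips above (RemoteCommands 996 through 1018, and the further rounds that follow) are the config-based distributed lock protocol driven by sh20014's drop: each attempt is an upserting findAndModify that can only match a free lock document ({ _id: "multidrop.coll", state: 0 }); while the entry is held in state 2 by mongovm16:20010, the upsert collides on the _id_ index and comes back E11000, after which the caller re-reads config.locks and config.lockpings to judge the holder's liveness before retrying. A minimal sketch of one such attempt, assuming a shell connection configDB to the config database (the helper name and return shape are illustrative, not the server's actual code):

    // One try-acquire round, mirroring commands 1004/1006/1008 above.
    function tryAcquireDistLock(configDB, name, who, process, why) {
        var res = configDB.runCommand({
            findAndModify: "locks",
            query: { _id: name, state: 0 },              // matches only an unlocked entry
            update: { $set: { ts: ObjectId(), state: 2, who: who,
                              process: process, when: new Date(), why: why } },
            upsert: true, new: true,
            writeConcern: { w: "majority", wtimeout: 15000 },
            maxTimeMS: 30000
        });
        if (res.ok) {
            return { acquired: true, lock: res.value };  // lock document now ours
        }
        if (res.code !== 11000) {
            throw Error("lock attempt failed: " + tojson(res));
        }
        // E11000: somebody holds it; fetch the holder and its last ping
        // so the caller can run the takeover check before retrying.
        var holder = configDB.locks.findOne({ _id: name });
        return { acquired: false,
                 holder: holder,
                 lastPing: configDB.lockpings.findOne({ _id: holder.process }) };
    }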
[js_test:multi_coll_drop] 2016-04-06T02:54:18.324-0500 sh20014| 2016-04-06T02:54:00.183-0500 D SHARDING [conn1] trying to acquire new distributed lock for multidrop.coll ( lock timeout : 900000 ms, ping interval : 30000 ms, process : mongovm16:20014:1459929123:-665935931 ) with lockSessionID: 5704c09806c33406d4d9c0df, why: drop [js_test:multi_coll_drop] 2016-04-06T02:54:18.331-0500 sh20014| 2016-04-06T02:54:00.183-0500 D ASIO [conn1] startCommand: RemoteCommand 1020 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:30.183-0500 cmd:{ findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c09806c33406d4d9c0df'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929240183), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.332-0500 sh20014| 2016-04-06T02:54:00.184-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1020 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.335-0500 sh20014| 2016-04-06T02:54:00.184-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1020 finished with response: { ok: 0.0, errmsg: "E11000 duplicate key error collection: config.locks index: _id_ dup key: { : "multidrop.coll" }", code: 11000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.337-0500 sh20013| 2016-04-06T02:53:08.274-0500 D QUERY [conn15] Only one plan is available; it will be run but will not be cached. query: { ts: ObjectId('5704c04b65c17830b843f1c7') } sort: {} projection: {}, planSummary: IXSCAN { ts: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.339-0500 sh20013| 2016-04-06T02:53:08.277-0500 D COMMAND [conn29] run command local.$cmd { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|3, t: 4 } } [js_test:multi_coll_drop] 2016-04-06T02:54:18.341-0500 sh20013| 2016-04-06T02:53:08.277-0500 I COMMAND [conn29] command local.oplog.rs command: getMore { getMore: 23953707769, collection: "oplog.rs", maxTimeMS: 2500, term: 4, lastKnownCommittedOpTime: { ts: Timestamp 1459929188000|3, t: 4 } } cursorid:23953707769 numYields:0 nreturned:1 reslen:495 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.345-0500 sh20012| 2016-04-06T02:53:49.047-0500 D COMMAND [conn45] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } [js_test:multi_coll_drop] 2016-04-06T02:54:18.346-0500 sh20012| 2016-04-06T02:53:49.047-0500 D COMMAND [conn45] command: replSetUpdatePosition [js_test:multi_coll_drop] 2016-04-06T02:54:18.347-0500 sh20012| 2016-04-06T02:53:49.047-0500 D REPL [conn45] received notification that node with memberID 0 in config with version 1 has reached optime: { ts: Timestamp 1459929228000|3, t: 7 } and is durable through: { ts: Timestamp 1459929228000|3, t: 7 } 
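The replSetUpdatePosition chatter running through this section is how the primary tracks every member's applied and durable optimes; once a majority of the three config members is durable through some optime, the primary advances its commit point (seen earlier as sh20013's "Updating _lastCommittedOpTime to { ts: Timestamp 1459929188000|3, t: 4 }"). In a 3-node set the rule reduces to taking the median durable optime. A toy illustration of that rule, with plain numbers standing in for Timestamp values (a sketch, not the server's implementation):

    // Commit point = highest optime such that a majority of members is
    // durable at or beyond it. Optimes compared by (term, secs, inc).
    function majorityCommitPoint(durable) {
        var sorted = durable.slice().sort(function (a, b) {
            return (a.term - b.term) || (a.secs - b.secs) || (a.inc - b.inc);
        });
        var majority = Math.floor(sorted.length / 2) + 1;    // 2 of 3 here
        return sorted[sorted.length - majority];
    }
    // With the durable optimes reported above, (t:7, 1459929228|2),
    // (t:6, 1459929210|1), (t:7, 1459929226|2), this yields (t:7, 1459929226|2).
    majorityCommitPoint([ { term: 7, secs: 1459929228, inc: 2 },
                          { term: 6, secs: 1459929210, inc: 1 },
                          { term: 7, secs: 1459929226, inc: 2 } ]);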
[js_test:multi_coll_drop] 2016-04-06T02:54:18.350-0500 sh20012| 2016-04-06T02:53:49.047-0500 D REPL [conn45] received notification that node with memberID 2 in config with version 1 has reached optime: { ts: Timestamp 1459929226000|2, t: 7 } and is durable through: { ts: Timestamp 1459929226000|2, t: 7 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.357-0500 sh20012| 2016-04-06T02:53:49.047-0500 I COMMAND [conn45] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, appliedOpTime: { ts: Timestamp 1459929228000|3, t: 7 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, appliedOpTime: { ts: Timestamp 1459929210000|1, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, appliedOpTime: { ts: Timestamp 1459929226000|2, t: 7 }, memberId: 2, cfgver: 1 } ] } numYields:0 reslen:82 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.358-0500 sh20012| 2016-04-06T02:53:49.116-0500 D COMMAND [conn33] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.362-0500 sh20012| 2016-04-06T02:53:49.122-0500 I COMMAND [conn33] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 5ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.362-0500 sh20012| 2016-04-06T02:53:49.144-0500 D COMMAND [conn34] run command admin.$cmd { ismaster: 1 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.364-0500 sh20012| 2016-04-06T02:53:49.144-0500 I COMMAND [conn34] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:467 locks:{} protocol:op_command 0ms [js_test:multi_coll_drop] 2016-04-06T02:54:18.370-0500 sh20012| 2016-04-06T02:53:49.349-0500 D COMMAND [conn42] run command config.$cmd { findAndModify: "locks", query: { _id: "multidrop.coll", state: 0 }, update: { $set: { ts: ObjectId('5704c08d06c33406d4d9c0d6'), state: 2, who: "mongovm16:20014:1459929123:-665935931:conn1", process: "mongovm16:20014:1459929123:-665935931", when: new Date(1459929229348), why: "drop" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.373-0500 sh20012| 2016-04-06T02:53:49.349-0500 D QUERY [conn42] Relevant index 0 is kp: { _id: 1 } unique name: '_id_' io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" } [js_test:multi_coll_drop] 2016-04-06T02:54:18.374-0500 2016-04-06T02:54:06.103-0500 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 192.168.100.28:20011, reason: Connection refused [js_test:multi_coll_drop] 2016-04-06T02:54:18.378-0500 sh20014| 2016-04-06T02:54:00.184-0500 D ASIO [conn1] startCommand: RemoteCommand 1022 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:30.184-0500 cmd:{ find: "locks", filter: { _id: "multidrop.coll" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.381-0500 sh20014| 2016-04-06T02:54:00.184-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1022 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.387-0500 sh20014| 2016-04-06T02:54:00.185-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1022 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "multidrop.coll", state: 2, ts: ObjectId('5704c06465c17830b843f1cb'), who: 
"mongovm16:20010:1459929128:185613966:conn5", process: "mongovm16:20010:1459929128:185613966", when: new Date(1459929188727), why: "splitting chunk [{ _id: -62.0 }, { _id: MaxKey }) in multidrop.coll" } ], id: 0, ns: "config.locks" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.390-0500 sh20014| 2016-04-06T02:54:00.185-0500 D ASIO [conn1] startCommand: RemoteCommand 1024 -- target:mongovm16:20013 db:config expDate:2016-04-06T02:54:30.185-0500 cmd:{ find: "lockpings", filter: { _id: "mongovm16:20010:1459929128:185613966" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1459929237000|1, t: 8 } }, limit: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.391-0500 sh20014| 2016-04-06T02:54:00.185-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1024 on host mongovm16:20013 [js_test:multi_coll_drop] 2016-04-06T02:54:18.394-0500 sh20014| 2016-04-06T02:54:00.185-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Request 1024 finished with response: { waitedMS: 0, cursor: { firstBatch: [ { _id: "mongovm16:20010:1459929128:185613966", ping: new Date(1459929228990) } ], id: 0, ns: "config.lockpings" }, ok: 1.0 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.394-0500 sh20014| 2016-04-06T02:54:00.185-0500 D ASIO [conn1] startCommand: RemoteCommand 1026 -- target:mongovm16:20013 db:admin expDate:2016-04-06T02:54:30.185-0500 cmd:{ serverStatus: 1, maxTimeMS: 30000 } [js_test:multi_coll_drop] 2016-04-06T02:54:18.395-0500 sh20014| 2016-04-06T02:54:00.185-0500 D ASIO [NetworkInterfaceASIO-ShardRegistry-0] Starting asynchronous command 1026 on host mongovm16:20013